Five security settings in iOS 12 you should change right now

iOS 12, Apple’s latest mobile software for iPhone and iPad, is finally out. The new software packs in a bunch of new security and privacy features you’ve probably already heard about.

Here’s what you need to do to take advantage of the new settings and lock down your device.

1. Turn on USB Restricted Mode to make hacking more difficult

This difficult-to-find new feature prevents any accessories from connecting to your device — like USB cables and headphones — when your iPhone or iPad has been locked for more than an hour. That prevents police and hackers alike from using tools to bypass your lock screen passcode and get your data.

Go to Settings > Touch ID & Passcode and type in your passcode. Then scroll down to USB Accessories and make sure the setting is switched off, so accessories aren’t permitted while the device is locked. (On an iPhone X, check your Face ID & Passcode settings instead.)

2. Make sure automatic iOS updates are turned on

Every time your iPhone or iPad updates, it comes with a slew of security patches to prevent crashes or data theft. Yet, how often do you update your phone? Most don’t bother unless it’s a major update. Now, iOS 12 will update your device behind the scenes, saving you downtime. Just make sure you switch it on.

Go to Settings > General > Software Update and turn on automatic updates.

3. Set a stronger device passcode

iOS has gotten better with passcodes in recent years. For years the default was a four-digit code; now it’s six digits. That makes running through every combination, known as brute-forcing, far more difficult.

But did you know that you can set a number-only code of any length? Eight digits, twelve, even more, and it keeps the number keypad on the lock screen so you don’t have to fiddle around with the full keyboard.

Go to Settings > Touch ID & Passcode and enter your passcode. Then tap Change Passcode and, under Passcode Options, choose Custom Numeric Code.

4. Now, switch on two-factor authentication

Two-factor authentication is one of the best ways to keep your account safe. If someone steals your password, they still need your phone to break into your account. For years, two-factor has been cumbersome and annoying. Now, iOS 12 has a new feature that auto-fills the code, taking the frustrating step out of the equation, so you have no excuse.

You may be asked to switch on two-factor when you set up your phone. You can also go to Settings and tap your name, then go to Password & Security. Just tap Turn on Two-Factor Authentication and follow the prompts.

5. While you’re here… change your reused passwords

iOS 12’s password manager has a new feature: password auditing. If it finds you’ve used the same password on multiple sites, it will warn you and advise you to change those passwords. That helps protect against password reuse attacks, known as “credential stuffing,” in which hackers take a username and password stolen from one site and try it against many other sites and services.

Go to Settings > Passwords & Accounts > Website & App Passwords and enter your passcode. You’ll see a small warning symbol next to each account with a reused password. One tap of the Change Password on Website button and you’re done.
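The auditing idea itself is simple. Below is a minimal sketch of how a password manager might flag reuse across saved logins; it illustrates the general technique only, not Apple’s implementation, and the sample logins are invented.

    from collections import defaultdict

    # Saved logins a password manager might hold (sample data, purely illustrative).
    saved_logins = [
        {"site": "example-shop.com", "username": "pat", "password": "hunter2"},
        {"site": "example-mail.com", "username": "pat", "password": "hunter2"},
        {"site": "example-bank.com", "username": "pat", "password": "x9!kQ#4z"},
    ]

    def find_reused_passwords(logins):
        """Group sites by password and report any password used on more than one site."""
        sites_by_password = defaultdict(set)
        for login in logins:
            sites_by_password[login["password"]].add(login["site"])
        return {pw: sites for pw, sites in sites_by_password.items() if len(sites) > 1}

    for password, sites in find_reused_passwords(saved_logins).items():
        print("Reused on:", ", ".join(sorted(sites)))

Attackers run the same comparison in reverse, taking one leaked username and password pair and trying it against many sites, which is why the warning is worth acting on.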

Source: Mobile – TechCrunch

AnchorFree, maker of Hotspot Shield, raises $295 million in new funding

AnchorFree, the maker of a popular virtual private networking app, has raised $295 million in a new round of funding, the company announced Wednesday.

The Redwood City, Calif.-based app maker’s flagship app, Hotspot Shield, ranks as one of the most popular VPN apps on the market. The app, based on a freemium model, lets users across the world tunnel their internet connections through AnchorFree’s servers, which masks their browsing histories from their internet providers and lets those under oppressive regimes evade state-level censorship.

The app has 650 million users in 190 countries, the company said, and also has a business-focused offering.

The funding round was led by WndrCo, a holding company focused on consumer tech businesses, with participation from Accel Partners, 8VC, SignalFire and Green Bay Ventures, among others.

“The WndrCo team brings deep operational experience in launching and scaling global tech products, and we look forward to working closely with them in pursuit of our mission to provide secure access to the world’s information for every person on the planet,” AnchorFree chief executive David Gorodyansky said in a statement.

The news was first reported by The New York Times.

Source: Mobile – TechCrunch

Apple will require all apps to have a privacy policy as of October 3

Apple is cracking down on apps that don’t communicate to users how their personal data is used, secured or shared. In an announcement posted to developers through the App Store Connect portal, Apple says that all apps, including those still in testing, will be required to have a privacy policy as of October 3, 2018.

Allowing apps without privacy policies is something of an obvious hole that Apple should have already plugged, given its generally protective nature over user data. But the change is even more critical now that Europe’s GDPR regulations have gone into effect. Though the app makers themselves would be ultimately responsible for their customers’ data, Apple, as the platform where those apps are hosted, has some responsibility here, too.

Platforms today are being held accountable for the behavior of their apps, and the data misuse that may occur as a result of their own policies around those apps.

Facebook CEO Mark Zuckerberg, for example, was dragged before the U.S. Senate about the Cambridge Analytica scandal, where data from 87 million Facebook users was inappropriately obtained by way of Facebook apps.

Apple’s new requirement, therefore, gives the company a layer of protection: any app that falls through the cracks going forward can be held accountable by way of its own privacy policy and the statements it contains.

Apple also notes that the privacy policy’s link or text cannot be changed until the developer submits a new version of their app. There’s still a bit of a loophole here, though: if developers add a link pointing to an external webpage, they can change what that webpage says at any time after their app is approved.

The new policy will be required for all apps and app updates across the App Store as well as through the TestFlight testing platform as of October 3, says Apple.

What’s not clear is whether Apple will review all the privacy policies itself as part of this change, in order to reject apps with questionable data use policies or user protections. If it does, App Store review times could increase, unless the company hires more staff.

Apple has already taken a stance on apps it finds questionable, like Facebook’s data-sucking VPN app Onavo, which it kicked out of the App Store earlier this month. The app had been live for years, however, and its App Store text did disclose that the data it collected was shared with Facebook. The fact that Apple only booted it now seems to indicate it will take a tougher stance going forward on apps that are designed to collect user data as one of their primary functions.

Source: Mobile – TechCrunch

Security researchers found a way to hack into the Amazon Echo

Hackers at DefCon have exposed new security concerns around smart speakers. Tencent’s Wu HuiYu and Qian Wenxiang spoke at the security conference with a presentation called Breaking Smart Speakers: We are Listening to You, explaining how they hacked into an Amazon Echo speaker and turned it into a spy bug.
The hack involved a modified Amazon Echo, which had parts swapped out, including some that had been soldered on. The modified Echo was then used to hack into other, non-modified Echos by connecting both the hackers’ Echo and a regular Echo to the same LAN.
This allowed the hackers to turn their own, modified Echo into a listening bug, relaying audio from the other Echo speakers without those speakers indicating that they were transmitting.
This method was very difficult to execute, but represents an early step in exploiting Amazon’s increasingly popular smart speaker.
The researchers notified Amazon of the exploit before the presentation, and Amazon has already pushed a patch, according to Wired.
Still, the presentation demonstrates how one Echo with malicious firmware could potentially compromise other speakers connected to the same network, which raises concerns about the idea of Echos installed in hotels.
Wired explained how the networking feature of the Echo allowed for the hack:
If they can then get that doctored Echo onto the same Wi-Fi network as a target device, the hackers can take advantage of a software component of Amazon’s speakers, known as Whole Home Audio Daemon, that the devices use to communicate with other Echoes in the same network. That daemon contained a vulnerability that the hackers found they could exploit via their hacked Echo to gain full control over the target speaker, including the ability to make the Echo play any sound they chose, or more worryingly, silently record and transmit audio to a faraway spy.
An Amazon spokesperson told Wired that “customers do not need to take any action as their devices have been automatically updated with security fixes,” adding that “this issue would have required a malicious actor to have physical access to a device and the ability to modify the device hardware.”
To be clear, the actor would only need physical access to their own Echo to execute the hack.
While Amazon has dismissed concerns that its voice-activated devices are monitoring you, hackers at this year’s DefCon proved that they can be.

Source: Gadgets – TechCrunch

Keepsafe launches a privacy-focused mobile browser

Keepsafe, the company behind the private photo app of the same name, is expanding its product lineup today with the release of a mobile web browser.

Co-founder and CEO Zouhair Belkoura argued that all of Keepsafe’s products (which also include a VPN app and a private phone number generator) are united not just by a focus on privacy, but by a determination to make those features simple and easy-to-understand — in contrast to what Belkoura described as “how security is designed in techland,” with lots of jargon and complicated settings.

Plus, when it comes to your online activity, Belkoura said there are different levels of privacy. There’s the question of the government and large tech companies accessing our personal data, which he argued people care about intellectually, but “they don’t really care about it emotionally.”

Then there’s “the nosy neighbor problem,” which Belkoura suggested is something people feel more strongly about: “A billion people are using Gmail and it’s scanning all their email [for advertising], but if I were to walk up to you and say, ‘Hey, can I read your email?’ you’d be like, ‘No, that’s kind of weird, go away.’ ”

It looks like Keepsafe is trying to tackle both kinds of privacy with its browser. For one thing, you can lock the browser with a PIN (it also supports Touch ID, Face ID and Android Fingerprint).

Keepsafe browser tabs

Then, once you’re actually browsing, you can use normal tabs, where social, advertising and analytics trackers are blocked (you can toggle which kinds of trackers are affected) but cookies and caching are still allowed, so you stay logged in to websites and other session data is retained. If you want an additional layer of privacy, you can open a private tab, where everything is forgotten as soon as you close it.
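Tracker blocking of this kind typically comes down to matching each outgoing request against per-category domain lists and the user’s toggles. Here is a minimal sketch of that idea; the domain lists, category names and toggles are invented for illustration and are not Keepsafe’s actual lists.

    from urllib.parse import urlparse

    # Illustrative blocklists; a real browser ships much larger, curated lists.
    TRACKER_DOMAINS = {
        "social":      {"tracker.social-widgets.example"},
        "advertising": {"ads.example", "adserver.example"},
        "analytics":   {"metrics.example", "stats.example"},
    }

    # Per-category toggles, as exposed in the browser's settings.
    enabled_categories = {"social": True, "advertising": True, "analytics": False}

    def should_block(request_url):
        """Return True if the request's host falls in a blocked tracker category."""
        host = urlparse(request_url).hostname or ""
        for category, domains in TRACKER_DOMAINS.items():
            if enabled_categories.get(category) and any(
                host == d or host.endswith("." + d) for d in domains
            ):
                return True
        return False

    print(should_block("https://ads.example/pixel.gif"))    # True: advertising tracker
    print(should_block("https://stats.example/beacon"))     # False: analytics toggled off
    print(should_block("https://news.example/article"))     # False: not a tracker

Private tabs layer a separate behavior on top of this, discarding cookies, cache and history when the tab closes.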

While you can get some of these protections just by turning on private/incognito mode in a regular browser, Belkoura said there’s a clarity for consumers when an app is designed specifically for privacy, and the app is part of a broader suite of privacy-focused products. In addition, he said he’s hoping to build meaningful integrations between the different Keepsafe products.

Keepsafe Browser is available for free on iOS and Android.

When asked about monetization, Belkoura said, “I don’t think that the private browser per se is a good place to directly monetize … I’m more interested in saying this is part of the Keepsafe suite and there are other parts of the Keepsafe Suite that we’ll charge you money for.”

Source: Mobile – TechCrunch

Verizon and others call a conditional halt on sharing location with data brokers

Verizon is cutting off access to its mobile customers’ real-time locations to two third-party data brokers “to prevent misuse of that information going forward.” The company announced the decision in a letter sent to Senator Ron Wyden (D-OR), who along with others helped reveal improper usage and poor security at these location brokers. It is not, however, getting out of the location-sharing business altogether.

(Update: AT&T and Sprint have also begun the process of ending their location aggregation services, with a caveat discussed below.)

Verizon sold bulk access to its customers’ locations to the brokers in question, LocationSmart and Zumigo, which then turned around and resold that data to dozens of other companies. This isn’t necessarily bad — there are tons of times when location is necessary to provide a service the customer asks for, and supposedly that customer would have to okay the sharing of that data. (Disclosure: Verizon owns Oath, which owns TechCrunch. This does not affect our coverage.)

That doesn’t seem to have been the case at LocationSmart customer Securus, which was selling its data directly to law enforcement so they could find mobile customers quickly and without all that fuss about paperwork and warrants. And then it was found that LocationSmart had exposed an API that allowed anyone to request mobile locations freely and anonymously, and without collecting consent.

When these facts were revealed by security researchers and Sen. Wyden, Verizon immediately looked into it, the company reported in a letter sent to the senator.

“We conducted a comprehensive review of our location aggregator program,” wrote Verizon CTO Karen Zacharia. “As a result of this review, we are initiating a process to terminate our existing agreements for the location aggregator program.”

“We will not enter into new location aggregation arrangements unless and until we are comfortable that we can adequately protect our customers’ location data through technological advancements and/or other practices,” she wrote later in the letter. In other words, the program is on ice until it can be secured.

Although Verizon claims to have “girded” the system with “mechanisms designed to protect against misuse of our customers’ location data,” the abuses in question clearly slipped through the cracks. Perhaps most notable is the simple fact that Verizon itself does not seem to need to be informed whether a customer has consented to having their location polled. That collection is the responsibility of “the aggregator or corporate customer.”

In other words, Verizon doesn’t need to ask the customer, and the company it sells the data to wholesale doesn’t need to ask the customer — the requirement devolves to the company buying access from the wholesaler. In Securus’s case, it had abstracted things one step further, allowing law enforcement full access when it said it had authority to do so, but apparently without checking, AT&T wrote in its own letter to Sen. Wyden.

And there were 75 other corporate customers. Don’t worry, someone is keeping track of them. Right?

These processes are audited, Verizon wrote, but apparently not by an audit that catches things like the abuse by Securus or a poorly secured API. Perhaps how this happened is among the “number of internal questions” raised by the review.

When asked for comment, a Verizon representative offered the following statement:

When these issues were brought to our attention, we took immediate steps to stop it. Customer privacy and security remain a top priority for our customers and our company. We stand by that commitment to our customers.

And indeed while the program itself appears to have been run with a laxity that should be alarming to all those customers for whom Verizon claims to be so concerned, some of the company’s competitors have yet to take similar action. AT&T, T-Mobile and Sprint were also named by LocationSmart as partners. Their own letters to Sen. Wyden stressed that their systems were similar to the others, with similar safeguards (that were similarly eluded).

In a press release announcing that his pressure on Verizon had borne fruit, Sen. Wyden called on the others to step up:

Verizon deserves credit for taking quick action to protect its customers’ privacy and security. After my investigation and follow-up reports revealed that middlemen are selling Americans’ location to the highest bidder without their consent, or making it available on insecure web portals, Verizon did the responsible thing and promptly announced it was cutting these companies off. In contrast, AT&T, T-Mobile, and Sprint seem content to continuing to sell their customers’ private information to these shady middle men, Americans’ privacy be damned.

AT&T actually announced that it is ending its agreements as well, after Sen. Wyden’s call to action was published, and Sprint followed shortly afterwards. AT&T said it “will be ending [its] work with these aggregators for these services as soon as is practical in a way that preserves important, potential lifesaving services like emergency roadside assistance.” Sprint stopped working with LocationSmart last month and is now “beginning the process of terminating its current contracts with data aggregators to whom we provide location data.”

What’s missing from these statements? Among other things: what and how many companies they’re working with, whether they’ll pursue future contracts, and what real changes will be made to prevent future problems like this. Since they’ve been at this for a long time and have had a month to ponder their next course of action, I don’t think it’s unreasonable to expect more than a carefully worded statement about “these aggregators for these services.”

T-Mobile CEO John Legere tweeted that the company “will not sell customer location data to shady middlemen.” Of course, that doesn’t really mean anything. I await substantive promises from the company pertaining to this “pledge.”

The FCC, meanwhile, has announced that it is looking into the issue — with the considerable handicap that Chairman Ajit Pai represented Securus back in 2012 when he was working as a lawyer. Sen. Wyden has called on him to recuse himself, but that has yet to happen.

I’ve asked Verizon for further clarification on its arrangements and plans, specifically whether it has any other location-sharing agreements in place with other companies. These aren’t, after all, the only players in the game.

Source: Mobile – TechCrunch

Purdue’s PHADE technology lets cameras ‘talk’ to you

It’s become almost second nature to accept that cameras everywhere, from streets to museums and shops, are watching you, but now they may be able to communicate with you as well. New technology from Purdue University computer science researchers, described in a paper published today, has made this dystopian prospect a reality. But, they argue, it’s safer than you might think.
The system, called PHADE, allows for something called “private human addressing,” where camera systems and individual cell phones can communicate without transmitting any personal data, like an IP or MAC address. Instead, the technology relies on a person’s motion patterns as the address code. That way, even if a hacker intercepts it, they won’t be able to access the person’s physical location.
Imagine you’re strolling through a museum and an unfamiliar painting catches your eye. The docents are busy with a tour group far across the gallery and you didn’t pay extra for the clunky recorder and headphones for an audio tour. While pondering the brushwork you feel your phone buzz, and suddenly a detailed description of the artwork and its painter is in the palm of your hand.
To achieve this effect, researchers use an approach similar to the kind of directional audio experience you might find at theme parks. Through processing the live video data, the technology is able to identify the individual motion patterns of pedestrians and when they are within a pertinent range — say, in front of a painting. From there they can broadcast a packet of information linked to the motion address of the pedestrian. When the user’s phone identifies that the motion address matches their own, the message is received.
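The core trick is that both sides derive the “address” from the same observable motion, so no network identifier ever changes hands. A rough sketch of that matching step follows, with made-up feature extraction, sample numbers and a threshold chosen only for illustration; the paper’s actual scheme is considerably more involved.

    import math

    def motion_signature(speeds):
        """Reduce a short window of movement speeds to a normalized feature vector."""
        norm = math.sqrt(sum(s * s for s in speeds)) or 1.0
        return [s / norm for s in speeds]

    def similarity(sig_a, sig_b):
        """Cosine similarity between two equal-length signatures."""
        return sum(a * b for a, b in zip(sig_a, sig_b))

    # Camera side: per-second walking speeds of a tracked pedestrian (sample numbers).
    camera_signature = motion_signature([1.2, 1.3, 0.4, 0.0, 0.1])
    broadcast = {"address": camera_signature, "payload": "About this painting: ..."}

    # Phone side: speeds the phone estimates from its own motion sensors.
    phone_signature = motion_signature([1.1, 1.3, 0.5, 0.0, 0.1])

    MATCH_THRESHOLD = 0.98  # illustrative; the real system tunes this carefully
    if similarity(broadcast["address"], phone_signature) > MATCH_THRESHOLD:
        print(broadcast["payload"])  # only the phone whose motion matches sees the message

The real system extracts much richer features from video and inertial sensors and handles many pedestrians at once; the point is simply that the address is something both sides can observe independently, rather than an identifier that has to be transmitted.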
While this tech can be used to better inform the casual museum-goer, the researchers also believe it has a role in protecting pedestrians from crime in their area.
“Our system serves as a bridge to connect surveillance cameras and people,” He Wang, a co-creator of the technology and assistant professor of computer science, said in a statement. “[It can] be used by government agencies to enhance public safety [by deploying] cameras in high-crime or high-accident areas and warn[ing] specific users about potential threats, such as suspicious followers.”
While the benefits of an increasingly interconnected world are still being debated and critiqued daily, there might just be an upside to knowing a camera’s got its eye on you.

Source: Gadgets – TechCrunch

Students confront the unethical side of tech in ‘Designing for Evil’ course

Whether it’s surveilling or deceiving users, mishandling or selling their data, or engendering unhealthy habits or thoughts, tech these days is not short on unethical behavior. But it isn’t enough to just say “that’s creepy.” Fortunately, a course at the University of Washington is equipping its students with the philosophical insights to better identify — and fix — tech’s pernicious lack of ethics.
“Designing for Evil” just concluded its first quarter at UW’s Information School, where prospective creators of apps and services like those we all rely on daily learn the tools of the trade. But thanks to Alexis Hiniker, who teaches the class, they are also learning the critical skill of inquiring into the moral and ethical implications of those apps and services.
What, for example, is a good way of going about making a dating app that is inclusive and promotes healthy relationships? How can an AI imitating a human avoid unnecessary deception? How can something as invasive as China’s proposed citizen scoring system be made as user-friendly as possible?
I talked to all the student teams at a poster session held on UW’s campus, and also chatted with Hiniker, who designed the course and seemed pleased at how it turned out.
The premise is that the students are given a crash course in ethical philosophy that acquaints them with influential ideas, such as utilitarianism and deontology.

“It’s designed to be as accessible to lay people as possible,” Hiniker told me. “These aren’t philosophy students — this is a design class. But I wanted to see what I could get away with.”
The primary text is Harvard philosophy professor Michael Sandel’s popular book Justice, which Hiniker felt combined the various philosophies into a readable, integrated format. After ingesting this, the students grouped up and picked an app or technology that they would evaluate using the principles described, and then prescribe ethical remedies.
As it turned out, finding ethical problems in tech was the easy part — and fixes for them ranged from the trivial to the impossible. Their insights were interesting, but I got the feeling from many of them that there was a sort of disappointment at the fact that so much of what tech offers, or how it offers it, is inescapably and fundamentally unethical.
I found the students fell into one of three categories.
Not fundamentally unethical (but could use an ethical tune-up)
WebMD is of course a very useful site, but it was plain to the students that it lacked inclusivity: its symptom checker is stacked against non-English-speakers and those who might not know the names of symptoms. The team suggested a more visual symptom reporter, with a basic body map and non-written symptom and pain indicators.
Hello Barbie, the doll that chats back to kids, is certainly a minefield of potential legal and ethical violations, but there’s no reason it can’t be done right. With parental consent and careful engineering it will be in line with privacy laws, but the team said that it still failed some tests of keeping the dialogue with kids healthy and parents informed. The scripts for interaction, they said, should be public — which is obvious in retrospect — and audio should be analyzed on device rather than in the cloud. Lastly, a set of warning words or phrases indicating unhealthy behaviors could warn parents of things like self-harm while keeping the rest of the conversation secret.

WeChat Discover allows users to find others around them and see recent photos they’ve taken — it’s opt-in, which is good, but it can be filtered by gender, promoting a hookup culture that the team said is frowned on in China. It also obscures many user controls behind multiple layers of menus, which may cause people to share location when they don’t intend to. Some basic UI fixes were proposed by the students, and a few ideas on how to combat the possibility of unwanted advances from strangers.
Netflix isn’t evil, but its tendency to promote binge-watching has robbed its users of many an hour. This team felt that some basic user-set limits like two episodes per day, or delaying the next episode by a certain amount of time, could interrupt the habit and encourage people to take back control of their time.
Fundamentally unethical (fixes are still worth making)
FakeApp is a way to face-swap in video, producing convincing fakes in which a politician or friend appears to be saying something they didn’t. It’s fundamentally deceptive, of course, in a broad sense, but really only if the clips are passed on as genuine. Watermarks visible and invisible, as well as controlled cropping of source videos, were this team’s suggestion, though ultimately the technology won’t yield to these voluntary mitigations. So really, an informed populace is the only answer. Good luck with that!
China’s “social credit” system is not actually, the students argued, absolutely unethical — that judgment involves a certain amount of cultural bias. But I’m comfortable putting it here because of the massive ethical questions it has sidestepped and dismissed on the road to deployment. Their highly practical suggestions, however, were focused on making the system more accountable and transparent. Contest reports of behavior, see what types of things have contributed to your own score, see how it has changed over time, and so on.
Tinder’s unethical nature, according to the team, was based on the fact that it was ostensibly about forming human connections but is very plainly designed to be a meat market. Forcing people to think of themselves as physical objects first and foremost in pursuit of romance is not healthy, they argued, and causes people to devalue themselves. As a countermeasure, they suggested having responses to questions or prompts be the first thing you see about a person. You’d have to swipe based on that before seeing any pictures. I suggested having some deal-breaker questions you’d have to agree on, as well. It’s not a bad idea, though open to gaming (like the rest of online dating).
Fundamentally unethical (fixes are essentially impossible)
The League, on the other hand, was a dating app that proved intractable to ethical guidelines. Not only was it a meat market, but it was a meat market where people paid to be among the self-selected “elite” and could filter by ethnicity and other troubling categories. Their suggestions of removing the fee and these filters, among other things, essentially destroyed the product. Unfortunately, The League is an unethical product for unethical people. No amount of tweaking will change that.
Duplex was taken on by a smart team that nevertheless clearly only started their project after Google I/O. Unfortunately, they found that the fundamental deception intrinsic in an AI posing as a human is ethically impermissible. It could, of course, identify itself — but that would spoil the entire value proposition. But they also asked a question I didn’t think to ask myself in my own coverage: why isn’t this AI exhausting all other options before calling a human? It could visit the site, send a text, use other apps and so on. AIs in general should default to interacting with websites and apps first, then to other AIs, then and only then to people — at which time it should say it’s an AI.

To me the most valuable part of all these inquiries was learning what hopefully becomes a habit: to look at the fundamental ethical soundness of a business or technology and be able to articulate it.
That may be the difference in a meeting between being able to say something vague and easily blown off, like “I don’t think that’s a good idea,” and describing a specific harm and reason why that harm is important — and perhaps how it can be avoided.
As for Hiniker, she has some ideas for improving the course should it be approved for a repeat next year. A broader set of texts, for one: “More diverse writers, more diverse voices,” she said. And ideally it could even be expanded to a multi-quarter course so that the students get more than a light dusting of ethics.
With any luck the kids in this course (and any in the future) will be able to help make those choices, leading to fewer Leagues and Duplexes and more COPPA-compliant smart toys and dating apps that don’t sabotage self-esteem.

Source: Gadgets – TechCrunch

This family’s Echo sent a private conversation to a random contact

A Portland family tells KIRO news that their Echo recorded and then sent a private conversation to someone on its list of contacts without telling them. Amazon called it an “extremely rare occurrence.” (And provided a more detailed explanation, below.)
Portlander Danielle said that she got a call from one of her husband’s employees one day telling her to “unplug your Alexa devices right now,” and suggesting she’d been hacked. He said that he had received recordings of the couple talking about hardwood floors, which Danielle confirmed.
Amazon, when she eventually got hold of the company, had an engineer check the logs, and he apparently discovered what they said was true. In a statement, Amazon said, “We investigated what happened and determined this was an extremely rare occurrence. We are taking steps to avoid this from happening in the future.”

What could have happened? It seems likely that the Echo’s voice recognition service misheard something, interpreting it as instructions to record the conversation like a note or message. And then it apparently also misheard them say to send the recording to this particular person. And it did all this without saying anything back.
The house reportedly had multiple Alexa devices, so it’s also possible that the system decided to ask for confirmation on the wrong device — saying “All right, I’ve sent that to Steve” on the living room Echo because the users’ voices carried from the kitchen. Or something.
Naturally no one expects to have their conversations sent out to an acquaintance, but it must also be admitted that the Echo is, fundamentally, a device that listens to every conversation you have and constantly sends that data to places on the internet. It also remembers more stuff now. If something does go wrong, “sending your conversation somewhere it isn’t supposed to go” seems a pretty reasonable way for it to happen.
Update: I asked Amazon for more details on what happened, and after this article was published it issued the following explanation, which more or less confirms how I suspected this went down:
Echo woke up due to a word in background conversation sounding like “Alexa.” Then, the subsequent conversation was heard as a “send message” request. At which point, Alexa said out loud “To whom?” At which point, the background conversation was interpreted as a name in the customer’s contact list. Alexa then asked out loud, “[contact name], right?” Alexa then interpreted background conversation as “right.” As unlikely as this string of events is, we are evaluating options to make this case even less likely.
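Read as a protocol, the failure is a chain of individually small misrecognitions, each “confirmed” by more background noise. A toy sketch of that flow, with invented confidence numbers standing in for the real speech pipeline:

    # Each step is the assistant's best guess at what was said, with a confidence score.
    # In this toy model, background chatter keeps producing "good enough" matches.
    overheard = [
        ("alexa", 0.62),           # a word that merely sounds like the wake word
        ("send a message", 0.58),  # conversation misread as a command
        ("steve", 0.55),           # a name matched against the contact list
        ("right", 0.60),           # a stray word taken as confirmation
    ]

    CONFIRM_THRESHOLD = 0.50  # if every step accepts weak matches, errors compound

    def run_dialogue(steps):
        intent = []
        for heard, confidence in steps:
            if confidence < CONFIRM_THRESHOLD:
                return None  # a single rejection would have stopped the chain
            intent.append(heard)
        return intent

    print(run_dialogue(overheard))
    # ['alexa', 'send a message', 'steve', 'right'] -> the recording gets sent

A stricter flow would demand a high-confidence, explicit confirmation before any audio leaves the device, which appears to be the sort of change Amazon says it is evaluating.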

Source: Gadgets – TechCrunch

FBI reportedly overestimated inaccessible encrypted phones by thousands

The FBI seems to have been caught fibbing again on the topic of encrypted phones. FBI director Christopher Wray estimated in December that investigators had been unable to access almost 7,800 phones from 2017 alone. The real number is likely less than a quarter of that, The Washington Post reports.
Internal records cited by sources put the actual number of encrypted phones at perhaps 1,200, though possibly as many as 2,000, and the FBI told the paper in a statement that its “initial assessment is that programming errors resulted in significant over-counting of mobile devices reported.” Supposedly, having three databases tracking the phones led to devices being counted multiple times.
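Counting each physical device once, no matter how many databases mention it, is about as basic as data hygiene gets: key every record on its serial number and take the union. A minimal sketch, with invented field names and sample records:

    # Each database exports device records; only the serial number matters for counting.
    database_a = [{"serial": "A111"}, {"serial": "B222"}]
    database_b = [{"serial": "B222"}, {"serial": "C333"}]
    database_c = [{"serial": "A111"}, {"serial": "C333"}, {"serial": "D444"}]

    naive_total = len(database_a) + len(database_b) + len(database_c)  # counts duplicates
    unique_devices = {rec["serial"] for db in (database_a, database_b, database_c) for rec in db}

    print(naive_total)          # 7
    print(len(unique_devices))  # 4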
Such a mistake would be so elementary that it’s hard to conceive of how it would be possible. These aren’t court notes, memos or unimportant random pieces of evidence; they’re physical devices with serial numbers and names attached. The idea that no one thought to check for duplicates before giving a number to the director for testimony before Congress suggests either conspiracy or gross incompetence.

The latter seems more likely after a report by the Office of the Inspector General that found the FBI had failed to utilize its own resources to access locked phones, instead suing Apple and then hastily withdrawing the case when its basis (a locked phone from a terror attack) was removed. It seems to have chosen to downplay or ignore its own capabilities in order to pursue the narrative that widespread encryption is dangerous without a backdoor for law enforcement.
An audit is underway at the Bureau to figure out just how many phones it actually has that it can’t access, and hopefully how this all happened.
It is unmistakably among the FBI’s goals to emphasize the problem of devices being fully encrypted and inaccessible to authorities, a trend known as “going dark.” That much it has said publicly, and it is a serious problem for law enforcement. But it seems equally unmistakable that the Bureau is happy to be sloppy, deceptive or both in its advancement of a tailored narrative.

Source: Gadgets – TechCrunch