Does Google’s Duplex violate two-party consent laws?

Google’s Duplex, which calls businesses on your behalf and imitates a real human, ums and ahs included, has sparked a bit of controversy among privacy advocates. Doesn’t Google recording a person’s voice and sending it to a data center for analysis violate two-party consent law, which requires everyone in a conversation to agree to being recorded? The answer isn’t immediately clear, and Google’s silence isn’t helping.

Let’s take California’s law as the example, since that’s the state where Google is based and where it used the system. Penal Code section 632 forbids recording any “confidential communication” (defined more or less as any non-public conversation) without the consent of all parties. (The Reporters Committee for Freedom of the Press has a good state-by-state guide to these laws.)

Google has provided very little in the way of details about how Duplex actually works, so attempting to answer this question involves a certain amount of informed speculation.

To begin with, I’m going to consider all phone calls as “confidential” for the purposes of the law. What constitutes a reasonable expectation of privacy is far from settled, and some will have it that there isn’t such an expectation when making an appointment with a salon. But what about a doctor’s office, or if you need to give personal details over the phone? Though some edge cases may qualify as public, it’s simpler and safer (for us and for Google) to treat all phone conversations as confidential.

As a second assumption, it seems clear that, like most Google services, Duplex’s work takes place in a data center somewhere, not locally on your device. So the system fundamentally requires that the other party’s audio be recorded and sent in some form to that data center for processing, at which point a response is formulated and spoken.
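To make that assumption concrete, here is a minimal sketch of the round trip being described: capture a chunk of the callee's audio, process it server-side, and synthesize a reply. Every function here is a hypothetical placeholder; Google has not published how Duplex actually works.

    # Hypothetical sketch of the assumed Duplex round trip -- not Google's code.
    from dataclasses import dataclass

    @dataclass
    class AudioChunk:
        samples: bytes      # raw audio captured from the phone line
        sample_rate: int

    def transcribe(chunk: AudioChunk) -> str:
        # Stand-in for server-side speech recognition.
        return "we open at nine"

    def plan_reply(transcript: str) -> str:
        # Stand-in for the dialog model that decides what to say next.
        return "Great -- could I book a haircut for ten?"

    def synthesize(text: str) -> AudioChunk:
        # Stand-in for text-to-speech, ums and ahs included.
        return AudioChunk(samples=b"", sample_rate=16000)

    def handle_turn(incoming: AudioChunk) -> AudioChunk:
        # The legally interesting step: the other party's voice is processed
        # server-side. How long `incoming` persists is the open question.
        return synthesize(plan_reply(transcribe(incoming)))

    reply = handle_turn(AudioChunk(samples=b"", sample_rate=16000))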

On its face, this sounds bad for Google. There’s no way the system is getting consent from whoever picks up the phone. That would spoil the whole interaction — “This call is being conducted by a Google system using speech recognition and synthesis; your voice will be analyzed at Google data centers. Press 1 or say ‘I consent’ to consent.” I would have hung up after about two words. The whole idea is to mask the fact that it’s an AI system at all, so getting consent that way won’t work.

But there’s wiggle room in the consent requirement when it comes to how the audio is recorded, transmitted and stored. After all, there are systems out there that may have to temporarily store a recording of a person’s voice without their consent — think of a VoIP call that caches audio for a fraction of a second in case of packet loss. There’s even a specific carve-out in the law for hearing aids, which if you think about it do in fact “record” private conversations. Temporary copies produced as part of a legal, beneficial service aren’t the target of this law.

This is partly because the law is about preventing eavesdropping and wiretapping, not preventing any recorded representation of conversation whatsoever that isn’t explicitly authorized. Legislative intent is important.

“There’s a little legal uncertainty there, in the sense of what degree of permanence is required to constitute eavesdropping,” said Mason Kortz, of Harvard’s Berkman Klein Center for Internet & Society. “The big question is what is being sent to the data center and how is it being retained. If it’s retained in the condition that the original conversation is understandable, that’s a violation.”

For instance, Google could conceivably keep a recording of the call, perhaps for AI training purposes, perhaps for quality assurance, perhaps for users’ own records (in case of a time-slot dispute at the salon, for example). Google does retain other data along these lines.

But it would be foolish. Google has an army of lawyers, and consent would have been one of the first things they tackled in the deployment of Duplex. For the onstage demos it would be simple enough to collect proactive consent from the businesses they were going to contact. But for actual use by consumers, the system needs to be engineered with the law in mind.

What would a functioning but legal Duplex look like? The conversation would likely have to be deconstructed and permanently discarded immediately after intake, the way audio is cached in a device like a hearing aid or a service like digital voice transmission.

A closer example of this is Amazon, which might have found itself in violation of COPPA, a law protecting children’s data, whenever a kid asked an Echo to play a Raffi song or do long division. The FTC decided that as long as Amazon and companies in that position immediately turn the data into text and then delete it afterwards, there’s no harm and, therefore, no violation. That’s not an exact analogue to Google’s system, but it is nonetheless instructive.

“It may be possible with careful design to extract the features you need without keeping the original, in a way where it’s mathematically impossible to recreate the recording,” Kortz said.
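To make that idea concrete, here is a minimal sketch, assuming a raw audio buffer, of the general "extract lossy features, discard the original" pattern Kortz describes: keep only coarse per-frame band energies and delete the waveform. This is purely illustrative, not Google's pipeline, and whether any given feature set is truly impossible to invert is a separate, harder question.

    # Illustrative only -- not Google's pipeline. Keep coarse spectral features,
    # discard the raw recording.
    import numpy as np

    def band_energies(audio: np.ndarray, frame_len: int = 1024,
                      n_bands: int = 8) -> np.ndarray:
        """Return a (frames, n_bands) array of log band energies."""
        n_frames = len(audio) // frame_len
        frames = audio[:n_frames * frame_len].reshape(n_frames, frame_len)
        spectrum = np.abs(np.fft.rfft(frames, axis=1)) ** 2
        bands = np.array_split(spectrum, n_bands, axis=1)
        return np.log1p(np.stack([b.sum(axis=1) for b in bands], axis=1))

    # Hypothetical stand-in for audio captured from the call.
    raw_audio = np.random.randn(16000 * 3).astype(np.float32)   # ~3 s at 16 kHz
    features = band_energies(raw_audio)
    del raw_audio   # the transient copy is discarded; only lossy features remain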

If that process is verifiable and there’s no possibility of eavesdropping — no chance any Google employee, law enforcement officer or hacker could get into the system and intercept or collect that data — then potentially Duplex could be deemed benign, transitory recording in the eyes of the law.

That assumes a lot, though. Frustratingly, Google could clear this up with a sentence or two. It’s suspicious that the company didn’t address this obvious question with even a single phrase, like Sundar Pichai adding during the presentation that “yes, we are compliant with recording consent laws.” Instead of people wondering if, they’d be wondering how. And of course we’d all still be wondering why.

We’ve reached out to Google multiple times on various aspects of this story, but for a company with such talkative products, they sure clammed up fast.

Source: Mobile – TechCrunch

Bell & Ross releases a new watch for travelers

In my endless quest to get geeks interested in watches, I present to you the Bell & Ross BR V2-93 GMT 24H, a new GMT watch from one of my favorite manufacturers that is a great departure from the company’s traditional designs.
The watch is a 41mm round GMT, which means it has three hands to show the time on a 12-hour scale and a separate fourth hand that shows the time on a 24-hour scale. You can use it to track the time in two or even three places, and it comes in a nice satin-brushed metal case with a rubber or metal strap.
B&R is unique because it’s one of the first companies to embrace online sales after selling primarily in watch stores for about a decade. This means the watches are slightly cheaper — this one is $3,500 — and jewelers can’t really jack up the prices in stores. Further, B&R has a great legacy of making legible, usable watches, and this one is no exception. It is also a fascinating addition to the line. B&R has an Instrument series, which consists of large, square watches with huge numerals, and a Vintage series that hearkens back to WWII-inspired, smaller watches. This one sits firmly in the middle, taking on the clear lines of the Instrument inside a more vintage case.
Ultimately watches like this one are nice tool watches — designed for legibility and usability above fashion. It’s a nice addition to the line and looks like something a proper geek could wear in lieu of Apple Watches and other nerd jewelry. Here’s hoping.

Source: Gadgets – TechCrunch

Snapchat Spectacles tests non-circular landscape exports

The worst thing about Spectacles is how closely tied they are to Snapchat. The proprietary circular photo and video format looks great inside Snapchat, where you can tip your phone around and the footage always stays full screen, but it gets reduced to a small circle with a big white border when you export it to your phone for sharing elsewhere.
Luckily, Snapchat has started beta testing new export formats for Spectacles through the beta version of its app. This lets you choose a black border instead of a white one, but importantly, also a horizontal 16:9 rectangular format that would fit well on YouTube and other traditional video players. The test was first spotted by Erik Johnson, and, when asked, a Snapchat spokesperson told TechCrunch “I can confirm we’re testing it, yes.”
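For a sense of what a non-circular export involves, here's a small geometry sketch (not Snap's code): a square frame holding circular content can either be center-cropped to a 16:9 band or letterboxed onto a 16:9 canvas, with the fill color standing in for the black-versus-white border choice the beta exposes.

    # Rough geometry sketch of a 16:9 export from square, circular-content
    # frames -- not Snap's implementation.
    import numpy as np

    def crop_center_16_9(frame: np.ndarray) -> np.ndarray:
        """Crop a square H x W x 3 frame to a centered 16:9 band."""
        h, w, _ = frame.shape
        target_h = int(w * 9 / 16)
        top = (h - target_h) // 2
        return frame[top:top + target_h]

    def letterbox_16_9(frame: np.ndarray, fill: int = 0) -> np.ndarray:
        """Place the full frame on a 16:9 canvas (fill=0 black, 255 white)."""
        h, w, _ = frame.shape
        canvas_w = int(h * 16 / 9)
        canvas = np.full((h, canvas_w, 3), fill, dtype=frame.dtype)
        left = (canvas_w - w) // 2
        canvas[:, left:left + w] = frame
        return canvas

    square_frame = np.zeros((1080, 1080, 3), dtype=np.uint8)   # placeholder frame
    print(crop_center_16_9(square_frame).shape)          # (607, 1080, 3), ~16:9
    print(letterbox_16_9(square_frame, fill=255).shape)  # (1080, 1920, 3)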
Allowing Spectacles to be more compatible with other services could make the v2 of its $150 photo and video-recording sunglasses much more convenient and popular. I actually ran into the Snapchat Spectacles team this weekend at the FORM Arcosanti music festival in Arizona where they were testing the new Specs and looking for ideas for their next camera. I suggested open sourcing the circular format or partnering so other apps could show it natively with the swivel effect, and Snap declined to comment about that. But now it looks like they’re embracing compatibility by just letting you ditch the proprietary format.
Breaking away from purely vertical or circular formats is also a bit of a coup for Snapchat, which has touted vertical as the media orientation of the future as that’s how we hold our phones. Many other apps, including Facebook’s Snapchat clones, adopted this idea. But with Snapchat’s growth slipping to its lowest rate ever, it may need to think about new ways to gain exposure elsewhere.

Seeing Spectacles content on other apps without ugly borders could draw attention back to Snapchat, or at least help Spectacles sell better than v1, which only sold 220,000 pairs and forced Snap to write off hundreds of thousands more that were gathering dust in warehouses. While it makes sense that Snap might have wanted to keep the best Spectacles content viewing experience on its own app, without user growth that approach has proven a limitation for what’s supposed to be a camera company.


Source: Gadgets – TechCrunch

Monzo, the U.K. challenger bank, finally rolls out Apple Pay

Monzo, the U.K. challenger bank, has finally added Apple Pay to its mobile-only current account. The just-over-three-year-old fintech says it has been one of the most requested features for its banking app, with over 2,000 mentions of Apple Pay on Monzo’s forum, whilst its customer support team has been asked about the functionality more than 13,000 times. In other words, the rollout can’t come soon enough. Notably, Monzo was able to add Google Pay all the way back in October 2017.

Meanwhile, many of its passionate and vocal users will be wondering what took Monzo so long (as an aside, rival challenger Starling was able to add Apple Pay in July 2017). The upstart bank, which usually makes a virtue of its community-driven approach and transparency, hasn’t been able to say (or even fully acknowledge that the feature was coming), likely because Apple imposes strict rules on the ways its partners communicate about working with the tech giant. And when you sign an NDA with Apple, it’s not atypical for it to stipulate that you don’t talk about said NDA.

What we do know is that — similar to Apple’s iOS App Store when submitting an app — the Apple Pay approval process for a new bank partner is not for the faint-hearted. Industry insiders tell me that Google Pay has fewer hurdles to jump in comparison.

Now that the feature is live, Monzo is talking up the security and privacy aspect of using Apple Pay, noting that when you use a credit or debit card with Apple Pay, the actual card numbers are not stored on the device, nor on Apple servers. Instead, “a unique Device Account Number is assigned, encrypted and securely stored in the Secure Element on your device… [and] each transaction is authorised with a one-time unique dynamic security code”.
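As a toy illustration of the pattern that quote describes (and emphatically not Apple's actual implementation, which relies on the Secure Element and the card networks' token services), the sketch below shows the general idea: the merchant sees only a device-bound token plus a one-time code derived per transaction, never the card number.

    # Toy sketch of payment tokenization -- nothing like Apple Pay's real design.
    import hmac, hashlib, os

    class ToyTokenizedCard:
        def __init__(self, real_pan: str):
            del real_pan  # deliberately never stored, mirroring the quote
            self._device_account_number = "DAN-" + os.urandom(8).hex()
            self._device_key = os.urandom(32)   # stand-in for a Secure Element key

        def pay(self, amount_pence: int, merchant_id: str) -> dict:
            nonce = os.urandom(8).hex()
            message = f"{self._device_account_number}|{amount_pence}|{merchant_id}|{nonce}"
            one_time_code = hmac.new(self._device_key, message.encode(),
                                     hashlib.sha256).hexdigest()[:16]
            # Only the token, amount, nonce and one-time code leave the "device".
            return {"token": self._device_account_number,
                    "amount_pence": amount_pence,
                    "merchant": merchant_id,
                    "nonce": nonce,
                    "one_time_code": one_time_code}

    card = ToyTokenizedCard(real_pan="4000 0000 0000 0002")
    print(card.pay(350, merchant_id="coffee-shop-42"))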

Of course, most people simply like Apple Pay for its convenience, letting you use your phone to pay rather than fumbling for a debit or credit card, and when shopping online not having to repeatedly enter card details.

Cue Monzo’s Tom Blomfield waxing lyrical in a company statement about Apple’s design and UX. “Apple is famous for building beautiful products with simple, intuitive interfaces. Their design thinking has long been a source of inspiration for us. Monzo’s mission has always been to make sure everyone can use and manage their money effortlessly, and with Apple Pay we are one step closer to achieving that,” says the challenger bank’s co-founder and CEO.

Source: Mobile – TechCrunch

This jolly little robot gets goosebumps

Cornell researchers have made a little robot that can express its emotions through touch, sending out little spikes when it’s scared or even getting goosebumps to express delight or excitement. The prototype, a cute smiling creature with rubber skin, is designed to test touch as an I/O system for robotic projects.

The robot mimics the skin of an octopus, which can turn spiky when threatened.
The researchers, Yuhan Hu, Zhengnan Zhao, Abheek Vimal and Guy Hoffman, created the robot to experiment with new methods for robot interaction. They compare the skin to “human goosebumps, cats’ neck fur raising, dogs’ back hair, the needles of a porcupine, spiking of a blowfish, or a bird’s ruffled feathers.”
“Research in human-robot interaction shows that a robot’s ability to use nonverbal behavior to communicate affects their potential to be useful to people, and can also have psychological effects. Other reasons include that having a robot use nonverbal behaviors can help make it be perceived as more familiar and less machine-like,” the researchers told IEEE Spectrum.
The skin has multiple configurations and is powered by a computer-controlled elastomer that can inflate and deflate on demand. The goosebumps pop up to match the expression on the robot’s face, allowing humans to better understand what the robot “means” when it raises its little hackles or gets bumpy. I, for one, welcome our bumpy robotic overlords.

Source: Gadgets – TechCrunch

To make Stories global, Facebook adds Archive and audio posts

Facebook’s future rests on convincing the developing world to adopt Stories. But just because the slideshow format will soon surpass feed sharing doesn’t mean people use it the same way everywhere. So late last year, Facebook sent a team to India to learn what features users there would need to embrace Stories across a variety of local languages and on phones without much storage.

Today, Facebook will start rolling out three big Stories features in India, which will come to the rest of the world shortly after. First, to lure posts from users who don’t want to type or don’t have a keyboard in their native language, as well as would-be micropodcasters, Facebook Stories will allow audio posts combining a voice message with a colored background or photo.

Second, Facebook Stories will get an Archive, similar to Instagram Stories’ version, that automatically saves your clips privately after they expire, so you can go back to check them out or re-share the content to the News Feed. And finally, Facebook will let Stories users privately Save their clips from the Facebook Camera directly to the social network instead of their phone, in case they don’t have enough space.

Facebook Stories Archive

“We know that the performance and reliability of viewing and posting Stories is extremely important to people around the world, especially those with slower connections,” Facebook’s director of Stories Connor Hayes tells me. “We are always working on ways to improve the experience of viewing Stories on all types of connections, and have been investing here — especially on our FB Lite app.”

Facebook has a big opportunity to capitalize on Snapchat’s failure to focus on the international market. Plagued by Android engineering problems and an initial reluctance to court users beyond U.S. teens, Snapchat left the door open for Facebook’s Stories products to win the globe. Now Snapchat has sunk to its slowest growth rate ever, hitting 191 million daily users despite shrinking in March. Meanwhile, WhatsApp Status, its clone of Snapchat Stories, has 450 million daily users, while Instagram Stories has over 300 million.

As for Facebook Stories, it was initially seen as a bit of a ghost town, but more and more of my friends are posting there, in part thanks to the ability to syndicate your Instagram Stories there. Facebook has never announced a user count for Facebook Stories, and Hayes says “We don’t have anything to share yet, but performance of Facebook Stories is encouraging, and we’ve learned a lot about how we can make the experience even better.” Facebook is hell-bent on making Stories work on its own app after launching the feature in mid-2017, and seems to believe users who find them needless or redundant will come around eventually.

My concern about the global rise of Stories is that instead of only capturing the biggest highlights of our lives with our phones, we’re increasingly interrupting all our activities and exiting the present to thrust our phones in the air.

That’s one thing Facebook hopes to fix here, Hayes tells me. “Saving photos and videos can be used to save what you might want to post later – so you don’t have to edit or post them while you’re out with your friends, and instead enjoy the moment at the concert and share them later.” You’re still injecting technology into your experience, though, so I hope we can all learn to record as subtly as possible without disturbing the memory for those around us.

Facebook Camera’s Save feature

The new Save to Facebook Camera feature creates a private tab in the Stories creation interface where you can access and post the imagery you’ve stored, and you’ll also find a Saved tab in your profile’s Photos section. Unlike Facebook’s discontinued Photo Sync feature, here you choose what to save one item at a time. It will be a big help to users lacking free space on their phones, as Facebook says many people around the world have to delete a photo just to save a new one.

Facebook wants to encourage people to invest more time decorating Stories, and learned that some people want to re-live or re-share clips that expire after 24 hours. That’s why it built the Archive, a hedge against the potentially short-sighted trend of ephemerality.

On the team’s journey to India, they heard that photos and videos aren’t always the easiest way to share. If you’re camera-shy, have a low-quality camera, or don’t have cool scenes to capture, audio posts could get you sharing more. In fact, Facebook started testing voice clips as feed status updates in March. “With this week’s update, you will have options to add a voice message to a colorful background or a photo from your camera gallery or saved gallery. You can also add stickers, text, or doodles” says Hayes. With 22 official languages in India and over 100 spoken, recording voice can often be easier than typing.

Facebook Audio Stories

Some users will still hate Stories, which are getting more and more prominence atop Facebook’s feed. But Facebook can’t afford to retreat here. Stories are social media bedrock — the most full-screen and immersive content medium we can record and consume with just our phones. Facebook CEO Mark Zuckerberg himself said that Facebook must make sure that “ads are as good in Stories as they are in feeds. If we don’t do this well, then as more sharing shifts to Stories, that could hurt our business.” That means Facebook Stories needs India’s hundreds of millions of users.

There will always be room for text, yet if people want to achieve an emotional impact, they’ll eventually wade into Storytelling. But social networks must remember low-bandwidth users, or we’ll only get windows into the developed world.


Source: Mobile – TechCrunch

Parsable secures $40M investment to bring digital to industrial workers

As we increasingly hear about automation, artificial intelligence and robots taking away industrial jobs, Parsable, a San Francisco-based startup, sees a different reality: one with millions of workers who have for the most part been left behind when it comes to bringing digital transformation to their jobs.

Parsable has developed a Connected Worker platform to help bring high tech solutions to deskless industrial workers who have been working mostly with paper-based processes. Today, it announced a $40 million Series C cash injection to keep building on that idea.

The round was led by Future Fund with help from B37 and existing investors Lightspeed Venture Partners, Airbus Ventures and Aramco Ventures. Today’s investment brings the total to nearly $70 million.

The Parsable solution works on almost any smartphone or tablet and is designed to let workers enter information while walking around in environments where a desktop PC or laptop simply wouldn’t be practical. That means being able to tap, swipe and select easily in a mobile context.

Photo: Parsable

The challenge the company faced was the perception that these workers don’t deal well with technology. Parsable CEO Lawrence Whittle says the company, which launched in 2013, took its time building its first product because it wanted to give industrial workers something they actually needed, not what engineers thought they needed. This meant a long period of primary research.

The company learned the product had to be dead simple, allowing industry vets who had been on the job for 25 or more years to feel comfortable using it out of the box, while also appealing to younger, more tech-savvy workers. The goal was making it feel as familiar as Facebook or texting, applications even older workers are used to using.

“What we are doing is getting rid of [paper] notebooks for quality, safety and maintenance and providing a digital guide on how to capture work with the objective of increasing efficiency, reducing safety incidents and increasing quality,” Whittle explained.

He likens this to the idea of putting a sensor on a machine, but instead they are putting that instrumentation into the hands of the human worker. “We are effectively putting a sensor on humans to give them connectivity and data to execute work in the same way as machines,” he says.

The company has also made the decision to keep the platform flexible enough to add new technology over time. As an example, it supports smart glasses, which Whittle says account for about 10 percent of its business today. But the founders recognized that reality could change, and they wanted to make the platform open enough to take on new technologies as they become available.

Today the company has 30 enterprise customers with 30,000 registered users on the platform. Customers include Ecolab, Schlumberger, Silgan and Shell. It has around 80 employees, but expects to hit 100 by the end of Q3 this year, Whittle says.

Source: Mobile – TechCrunch

Index and Atomico back Teatime Games, a stealthy new startup from QuizUp founders

Teatime Games, a new Icelandic “social games” startup from the same team behind the hugely popular QuizUp (acquired by Glu Mobile), is disclosing $9 million in funding, made up of seed and Series A rounds.

Index Ventures led both, but have been joined by Atomico, the European VC fund founded by Skype’s Niklas Zennström, for the $7.5 million Series A round. I understand this is the first time the two VC firms have done a Series A deal together in over a decade.

Both VCs have a decent track record in gaming. Index counts King, Roblox and Supercell as previous gaming investments, whilst Atomico also backed Supercell, along with Rovio, and most recently Bossa Studios.

As part of the round, Guzman Diaz of Index Ventures, Mattias Ljungman of Atomico, and David Helgason, founder of Unity, have joined the Teatime Games board of directors.

Meanwhile, Teatime Games is keeping shtum publicly on exactly what the stealthy startup is working on, except to say that it plays broadly in the social and mobile gaming space. On a call yesterday, co-founder and CEO Thor Fridriksson told me a little more off the record, on the condition that I don’t write about it yet.

What he was willing to describe publicly, however, is the general problem the company has set out to solve: how to make mobile games more social and personalised. Specifically, in a way that any social features — including communicating with friends and other players in real time — enhance the gameplay rather than get in its way or feel simply bolted on as an adjunct to the game itself.

The company’s macro thesis is that games have always been inherently social throughout different eras (e.g. card games, board games, arcades, and consoles), and that most games truly come to life “through the interaction between people, opponents, and the audience”. However, in many respects this has been lost in the age of mobile gaming, which can feel like quite a solitary experience. That’s either because they are single player games or turn-based and played against invisible opponents.

Teatime plans to use the newly disclosed investment to double the size of its team in Iceland, with a particular focus on software engineers, and to further develop its social gaming offering for third-party developers. Yes, that’s right: this is clearly a developer platform play, as much as anything else.

On that note, Atomico partner Mattias Ljungman says the next “breakout opportunity” in games will see a move beyond individual studios and titles to what he describes as fundamental enabling technologies. Linked to this, he argues that the next generation of games companies will “become ever more mass market and socially connected”. You can read much more on Ljungman and Atomico’s gaming thesis in a blog post recently published by the VC firm.

Source: Mobile – TechCrunch

Watch a laser-powered RoboFly flap its tiny wings

Making something fly involves a lot of trade-offs. Bigger stuff can hold more fuel or batteries, but too big and the lift required is too much. Small stuff takes less lift to fly but might not hold a battery with enough energy to do so. Insect-sized drones have had that problem in the past — but now this RoboFly is taking its first flaps into the air… all thanks to the power of lasers.
We’ve seen bug-sized flying bots before, like the RoboBee, but that one has wires attached to it that provide power. Batteries on board would weigh it down too much, so researchers have focused in the past on demonstrating that flight is possible at that scale in the first place.
But what if you could provide power externally without wires? That’s the idea behind the University of Washington’s RoboFly, a sort of spiritual successor to the RoboBee that gets its power from a laser trained on an attached photovoltaic cell.
“It was the most efficient way to quickly transmit a lot of power to RoboFly without adding much weight,” said co-author of the paper describing the bot, Shyam Gollakota. He’s obviously very concerned with power efficiency — last month he and his colleagues published a way of transmitting video with 99 percent less power than usual.
There’s more than enough power in the laser to drive the robot’s wings; it gets adjusted to the correct voltage by an integrated circuit, and a microcontroller sends that power to the wings depending on what they need to do. Here it goes:

“To make the wings flap forward swiftly, it sends a series of pulses in rapid succession and then slows the pulsing down as you get near the top of the wave. And then it does this in reverse to make the wings flap smoothly in the other direction,” explained lead author Johannes James.
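Here's a small numerical sketch of that pulse scheme, an illustration of the idea rather than the team's actual firmware: pulses bunch up when the wing should be moving fast and thin out as it nears the top of the stroke, then the pattern mirrors for the return. The flap frequency and tick rate below are made-up numbers.

    # Illustration of pulse-rate modulation for a flapping wing -- not the
    # UW team's firmware; all numbers are made up.
    import numpy as np

    def flap_pulse_times(flap_hz: float = 170.0, tick_hz: float = 40_000.0,
                         n_periods: int = 2, gain: float = 0.25) -> np.ndarray:
        """Return pulse timestamps whose density tracks the wing's desired speed."""
        t = np.arange(0.0, n_periods / flap_hz, 1.0 / tick_hz)
        # Desired wing position follows a sine wave, so the speed (|cos|) is
        # highest mid-stroke and falls toward zero at the top of the wave.
        wing_speed = np.abs(np.cos(2 * np.pi * flap_hz * t))
        # Accumulate "demand" and emit a pulse each time it crosses an integer:
        # rapid pulses mid-stroke, slower pulsing near stroke reversal, then
        # the same thing in reverse on the way back.
        acc = np.cumsum(wing_speed * gain)
        emit = np.diff(np.floor(acc), prepend=0.0) >= 1.0
        return t[emit]

    pulses = flap_pulse_times()
    print(f"{pulses.size} pulses over 2 flap periods; "
          f"mean gap {np.diff(pulses).mean() * 1e6:.1f} microseconds")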
At present the bot just takes off, travels almost no distance and lands — but that’s just to prove the concept of a wirelessly powered robot insect, which is no small feat. The next steps are to improve onboard telemetry so it can control itself, and to build a steered laser that can follow the little bug’s movements and continuously beam power in its direction.
The team is headed to Australia next week to present the RoboFly at the International Conference on Robotics and Automation in Brisbane.

Source: Gadgets – TechCrunch

First CubeSats to travel the solar system snap ‘Pale Blue Dot’ homage

The InSight launch earlier this month had a couple of stowaways: a pair of tiny CubeSats that are already farther from Earth than any such tiny satellites have ever been — by a long shot. And one of them got a chance to snap a picture of their home planet as an homage to the Voyager mission’s famous “Pale Blue Dot.” It’s hardly as amazing a shot as the original, but it’s still cool.
The CubeSats, named MarCO-A and B, are an experiment to test the suitability of pint-size craft for exploration of the solar system; previously, CubeSats have only ever been deployed into Earth orbit.
That changed on May 5, when the InSight mission took off, with the MarCO twins detaching on a similar trajectory to the geology-focused Mars lander. It wasn’t long before they went farther than any CubeSat has gone before.


A few days after launch, MarCO-A and B were about a million kilometers (621,371 miles) from Earth, and it was time for them to unfold their high-gain antennas. A fisheye camera attached to MarCO-B’s chassis had an eye on the process and took a picture to send back home, informing mission control that all was well.
But as a bonus (though not by accident — very few accidents happen on missions like this), Earth and the moon were in full view as MarCO-B took its antenna selfie.

“Consider it our homage to Voyager,” said JPL’s Andy Klesh in a news release. “CubeSats have never gone this far into space before, so it’s a big milestone. Both our CubeSats are healthy and functioning properly. We’re looking forward to seeing them travel even farther.”
So far it’s only good news and validation of the idea that cheap CubeSats could potentially be launched by the dozen to undertake minor science missions at a fraction of the cost of something like InSight.
Don’t expect any more snapshots from these guys, though. A JPL representative told me the cameras were really only included to make sure the antenna deployed properly. Really any pictures of Mars or other planets probably wouldn’t be worth looking at twice — these are utility cameras with fisheye lenses, not the special instruments that orbiters use to get those great planetary shots.
The MarCOs will pass by Mars at the same time that InSight is making its landing, and depending on how things go, they may even be able to pass on a little useful info to mission control while it happens. Tune in on November 26 for that!

Source: Gadgets – TechCrunch