Google’s Wear OS gets a new look

Wear OS, Google’s smartwatch operating system that was once called Android Wear, is getting a new look today. Google says the overall idea here is to give you quicker access to information and more proactive help. In line with the Google Fit redesign, Wear OS now also provides you with the same kind of health coaching as the Android app.
In practice, this means you can now swipe through multiple notifications at once, for example. Previously, you had to go from one notification card to the next, which sounds minor but was indeed a bit of a hassle. Like before, you bring up the notifications feed by swiping up. If you want to reply or take any other action, you tap the notification to bring up those options.

Wear OS is also getting a bit of a Google Now replacement. Simply swipe right and the Google Assistant will bring up the weather, your flight status, hotel notifications or other imminent events. Like in most other Assistant-driven interfaces, Google will also use this area to help you discover other Assistant features like setting timers (though I think everybody knows how to use the Assistant to set a timer, given that I’m sure that’s 90 percent of Assistant usage right there).

As for Google Fit, it doesn’t come as a surprise that Wear OS is adopting the same circle design, with Heart Points and Move Minutes, as the Android app. On a round Wear OS watch, that design actually looks quite good.
While this obviously isn’t a major break from previous versions, we’re definitely talking about quality-of-life improvements here that do make using Wear OS just that little bit easier.

Source: Gadgets – TechCrunch

SNES.party lets you play Super Nintendo with your friends

Hot on the heels of the wonderful NES.party comes Haukur Rosinkranz’s SNES.party, a site that lets you play Super Nintendo with all your buds.
Rosinkranz is Icelandic but lives in Berlin now. He made NES.party a year ago while experimenting with WebRTC and WebSockets and he updated his software to support the SNES.
“The reason I made it was simply because I discovered how advanced the RTC implementation in Chrome had become and wanted to do something with it,” he said. “When I discovered that it’s possible to take a video element and stream it over the network I just knew I had to do something cool with this and I came up with the idea of streaming emulators.”
He said it took him six months to build the app and a month to add NES support.
“It’s hard to say how long it took because I basically created my own framework for web applications that need realtime communication between one or more participants,” he said. He is a freelance programmer.
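The architecture Rosinkranz describes boils down to one-way video out, controller input back: the host captures the emulator’s canvas with captureStream() and sends it over a WebRTC peer connection, while guests return button presses over an RTCDataChannel. As a rough, hypothetical sketch (SNES.party’s actual protocol isn’t public), the input messages could be as simple as a bitmask per player:

```javascript
// Hypothetical input-relay encoding for a streamed emulator.
// The SNES pad has 12 buttons; pack the pressed ones into a bitmask
// so each input event is a tiny message on the data channel.
const BUTTONS = ['B', 'Y', 'Select', 'Start', 'Up', 'Down',
                 'Left', 'Right', 'A', 'X', 'L', 'R'];

function encodeInput(player, pressed) {
  let mask = 0;
  for (const name of pressed) {
    const bit = BUTTONS.indexOf(name);
    if (bit >= 0) mask |= 1 << bit;
  }
  return JSON.stringify({ player, mask });
}

function decodeInput(message) {
  const { player, mask } = JSON.parse(message);
  const pressed = BUTTONS.filter((_, bit) => (mask >> bit) & 1);
  return { player, pressed };
}

// Guest side: dataChannel.send(encodeInput(2, ['A', 'Right']));
// Host side: feed decodeInput(event.data) into the emulator core.
```

The video leg would use browser APIs (canvas.captureStream() plus RTCPeerConnection), which are omitted here since they only run in a browser.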
It’s a clever hack that could add a little fun to your otherwise dismal day. Feel like a little Link to the Past? Pop over here and let’s play!

Source: Gadgets – TechCrunch

Musical.ly investor bets on internet radio with $17M deal for Korea’s Spoon Radio

One of the early backers of Musical.ly, the short video app that was acquired for $1 billion, is making a major bet that internet radio is one of the next big trends in media.

Goodwater Capital, one of a number of backers that won big when ByteDance acquired Musical.ly last year, has joined forces with Korean duo Softbank Ventures and KB Investment to invest $17 million into Korea’s Spoon Radio. The deal is a Series B for parent company Mykoon, which operates Spoon Radio and previously developed an unsuccessful smartphone battery sharing service.

That’s much like Musical.ly, which famously pivoted to a karaoke app after failing to build an education service.

“We decided to create a service, now known as Spoon Radio, that was inspired by what gave us hope when [previous venture] ‘Plugger’ failed to take off. We wanted to create a service that allowed people to truly connect and share their thoughts with others on everyday, real-life issues like the ups and downs of personal relationships, money, and work.

“Unlike Facebook and Instagram where people pretend to have perfect lives, we wanted to create an accessible space for people to find and interact with influencers that they could relate with on a real and personal level through an audio and pseudo-anonymous format,” Mykoon CEO Neil Choi told TechCrunch via email.

Choi started the company in 2013 with fellow co-founders Choi Hyuk jun and Hee-jae Lee, and today Spoon Radio operates much like an internet radio station.

Users can tune in to talk show or music DJs, and leave comments and make requests in real-time. The service also allows users to broadcast themselves and, like live-streaming, broadcasters — or DJs, as they are called — can monetize by receiving stickers and other virtual gifts from their audience.

Spoon Radio claims 2.5 million downloads and “tens of millions” of audio broadcasts uploaded each day. Most of that userbase is in Korea, but the company said it is seeing growth in markets like Japan, Indonesia and Vietnam. In response to that growth — which Choi said is over 1,000 percent year-on-year — this funding will be used to invest in expanding the service in Southeast Asia, the rest of Asia and beyond.

Audio social media isn’t a new concept.

Singapore’s Bubble Motion raised close to $40 million from investors, but it was sold in an underwhelming and undisclosed deal in 2014, reportedly after the firm had failed to find a buyer and was ready to liquidate its assets. Altruist, the India-based mobile services company that bought Bubble Motion, has done little with the service. Most changes have been bug fixes, and the iOS app, for example, has not been updated for nearly a year.

Things have changed in the last four years, with smartphone growth surging across Asia and worldwide. That could mean a different fortune for Spoon Radio, and there are also differences between the two services in terms of strategy.

Bubbly was run like a social network — a ‘Twitter for voice’ — whereas Spoon Radio is focused on a consumption-based model that, as the name suggests, mirrors traditional radio.

“This is mobile consumer internet at its best,” Eric Kim, one of Goodwater Capital’s two founding partners, told TechCrunch in an interview. “Spoon Radio is taking an offline experience that exists in classic radio and making it even better.”

Kim admitted that when he first used the service he didn’t see the appeal — he claimed the same was true for Musical.ly — but he said he changed his tune after talking to listeners and using Spoon Radio. He said it reminded him of being a kid growing up in the U.S. and listening to radio shows avidly.

“It’s a really interesting phenomenon taking off in Asia because of smartphone growth and people being keen for content, but not always able to get video content. It was a net new behavior that we’d never seen before… Musical.ly was in the same bracket as net new content for the new generation, we’ve been paying attention to this category broadly,” Kim — whose firm’s other Korean investments include chat app giant Kakao and fintech startup Toss — explained.

Source: Mobile – TechCrunch

Say hello to Android 9 Pie

The nickname for Android 9 is “Pie.” It’s not the most inspired of Android names, but it’ll do. What really matters at the end of the day are the new features in Pie — and there are plenty of those.

If you are a Pixel owner, you’ll be happy to hear that Pie will start rolling out as an over-the-air update today. The same goes for every other device that was enrolled in the Android Beta (that includes any Sony Mobile, Xiaomi, HMD Global, Oppo, Vivo, OnePlus and Essential devices that got the betas) and qualifying Android One devices. Everybody else, well, you know the drill. Wait until your manufacturer launches it for you… which should be the end of the year for some — and never for quite a few others.

Overall, Pie is a solid upgrade. The only real disappointment here is that Pie won’t launch with Android’s new digital wellness features by default. Instead, you’ll have to sign up for a beta and own a Pixel device. That’s because these new features won’t officially launch until the fall (Google’s hardware event, which traditionally happens in early October, seems like a good bet for the date).

Let’s talk about the features you’ll get when you update to Android 9 Pie, though. The most obvious sign that you have updated to the new version is the new system navigation bar, which replaces the standard three-icon navigation bar that has served Android users well for the last couple of iterations. The new navigation bar replaces the three icons (back, home, overview) that are virtually always on screen with a more adaptive system and a home button that now lets you swipe to switch between apps (instead of tapping on the overview button). You can also now swipe up on the home button and see full-screen previews of the apps you used recently, as well as the names of a few apps that Google thinks you’ll want to use. A second up-swipe and you get to the usual list of all of your installed apps.

In day-to-day use, I’m not yet 100 percent convinced that this new system is any better than the old one. Maybe I just don’t like change, but the whole swiping thing does not strike me as very efficient, and if you leave your finger on the home button for a split-second longer than Google expects, it’ll launch the Assistant instead of letting you swipe between apps. You get used to it, though, and you can get back to the old system if you want to.

Google’s suggestions for apps you’ll like and want to use when you swipe up feel like a nice tech demo but aren’t all that useful in day-to-day use. I’m sure Google uses some kind of machine learning to power these suggestions, but I’d rather use that area as an extended favorites bar where I can pin a few additional apps. It’s not that Android’s suggestions were necessarily wrong, or that these weren’t apps I wanted to use; it’s mostly that the apps it suggested were already on my home screen anyway. I don’t think I ever started an app from there while using the last two betas.

But that’s enough grumbling, because it’s actually all of the little things that make Android 9 Pie better. There’s stuff like the adaptive battery management, which makes your battery last longer by learning which apps you use the most. And that’s great (though I’m not sure how much influence it has had on my daily battery life), but the new feature that actually made me smile was a new popup that tells you that you have maybe 20 percent of battery left and that this charge should last until 9:20pm. That’s actually useful.

Google also loves to talk about its Adaptive Brightness feature, which learns how you like your screen brightness based on your surroundings, but what actually made a difference for me is that Google now fades out the whole settings drawer when you change the setting, so you can actually see what difference those changes make. It’s also nice to have the volume slider pop up right next to the volume buttons now.

Speaking of sound: your phone now plays a pleasant little sound when you plug in the charger. It’s the little things that matter, after all.

The other new machine learning-powered feature is the smart text selection tool, which recognizes the meaning of the text you selected and then suggests relevant actions, like opening Google Maps or bringing up the share dialog for an address. It’s nifty when it works, but here, too, what actually makes the real difference in daily usage is that the text selection magnifier shows you a larger, clearer picture of what you’re selecting (and it sits right on top of what you are selecting), which makes it far easier to pick the right text (and yes, iOS pretty much does the same thing).

And now we get to the part where I wish I could tell you all about the flagship Digital Wellness features in Pie (because pie and wellness go together like Gwyneth Paltrow and jade eggs), but we’ll have to wait a few days for that. Here’s what we know will be available: a dashboard for seeing where you spend time on your device; an app timer that lets you set limits on how long you can use Instagram, for example, and then grays out the icon of that app; and a Wind Down feature that switches on the night-light mode, turns on Do Not Disturb and fades the screen to grayscale before it’s bedtime.

The one wellness feature you can try now if you are on Pie already is the new Do Not Disturb tool that lets you turn off all visual interruptions. To try out everything else, you’ll have to sign up for the beta here.

Another feature that’s only launching in the fall is “slices” (like slices of pie…). I was looking forward to this one, as it’ll allow developers to highlight parts of their apps (maybe to start playing a song or hail a car) in the Android Pie search bar when warranted. Maybe Google wasn’t ready yet, or maybe its partners just hadn’t built enough slices; either way, we won’t see these pop up in Android Pie until later this year.

And that’s Android 9 Pie. It’s a nice update for sure, and while Google loves to talk about all of the machine learning and intelligence it’s baking into Android, at the end of the day, it’s the small quality of life changes that actually make the biggest difference.

Source: Mobile – TechCrunch

Opera adds a crypto wallet to its mobile browser

The Opera Android browser will soon be able to hold your cryptocurrencies. The system, now in beta, lets you store cryptocurrency and ERC-20 tokens in your browser, send and receive crypto on the fly, and secure your wallet with your phone’s biometric security or passcode.

You can sign up to try the beta here.

The feature, called Crypto Wallet, “makes Opera the first major browser to introduce a built-in crypto wallet” according to the company. The feature could allow for micropayments in the browser and paves the way for similar features in other browsers.

From the release:

We believe the web of today will be the interface to the decentralized web of tomorrow. This is why we have chosen to use our browser to bridge the gap. We think that with a built-in crypto wallet, the browser has the potential to renew and extend its important role as a tool to access information, make transactions online and manage users’ online identity in a way that gives them more control.

In addition to being able to send money from wallet to wallet and interact with Dapps, Opera now supports online payments with cryptocurrency where merchant support exists. Users who choose to pay for their order with cryptocurrency at Coinbase Commerce-enabled merchants will be presented with a payment request dialog asking them for their signature. The payment will then be signed and transmitted directly from the browser.
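Under the hood, an ERC-20 token payment like the one described comes down to a transaction whose data field encodes a call to the token’s transfer(address,uint256) function. The sketch below shows only that standard ABI encoding; it is not Opera’s internal code, and the signing prompt and broadcast step the wallet performs are omitted.

```javascript
// Build the calldata for ERC-20 transfer(address,uint256).
// 0xa9059cbb is the standard function selector: the first four bytes
// of keccak256("transfer(address,uint256)").
function encodeErc20Transfer(toAddress, amount) {
  const selector = 'a9059cbb';
  // Both arguments are left-padded to 32 bytes (64 hex chars).
  const addr = toAddress.replace(/^0x/, '').toLowerCase().padStart(64, '0');
  const value = amount.toString(16).padStart(64, '0'); // amount is a BigInt
  return '0x' + selector + addr + value;
}

// A wallet would place this in an unsigned transaction addressed to the
// token contract, prompt the user (biometric/passcode), sign locally,
// and broadcast the signed transaction to the network.
```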

While it’s still early days for this sort of technology, it’s interesting to see a mainstream browser entering the space. Don’t hold your breath waiting for crypto in Safari or Edge, but Chrome and other “open source” browsers could easily add these features given enough demand.

Source: Mobile – TechCrunch

You can now stream to your Sonos devices via AirPlay 2

Newer Sonos devices and “rooms” now appear as AirPlay 2-compatible devices, allowing you to stream audio to them from Apple devices. The solution is a long time coming for Sonos, which promised AirPlay 2 support in October.
You can stream to Sonos One, Sonos Beam, Playbase, and Play:5 speakers and ask Siri to play music on various speakers (“Hey Siri, play some hip-hop in the kitchen.”) The feature should roll out to current speakers today.
I tried a beta version and it worked as advertised. A set of speakers including a Beam and a Sub in my family room showed up as a single speaker and a Sonos One in the kitchen showed up as another. I was able to stream music and podcasts to either one.
Given the ease with which you can now stream to nearly every device from every device it’s clear that whole-home audio is progressing rapidly. As we noted before Sonos is facing tough competition but little tricks like this one help it stay in the race.

Source: Gadgets – TechCrunch

Apple is rebuilding Maps from the ground up

I’m not sure if you’re aware, but the launch of Apple Maps went poorly. After a rough first impression, an apology from the CEO, several years of patching holes with data partnerships and some glimmers of light with long-awaited transit directions and improvements in business, parking and place data, Apple Maps is still not where it needs to be to be considered a world class service.

Maps needs fixing.

Apple, it turns out, is aware of this, so it’s rebuilding the maps part of Maps.

It’s doing this by using first-party data gathered by iPhones with a privacy-first methodology and its own fleet of cars packed with sensors and cameras. The new product will launch in San Francisco and the Bay Area with the next iOS 12 Beta and will cover Northern California by fall.

Every version of iOS will get the updated maps eventually and they will be more responsive to changes in roadways and construction, more visually rich depending on the specific context they’re viewed in and feature more detailed ground cover, foliage, pools, pedestrian pathways and more.

This is nothing less than a full reset of Maps, and it has been four years in the making: that’s when Apple began to develop its new data gathering systems. Eventually, Apple will no longer rely on third-party data to provide the basis for its maps, which has been one of its major pitfalls from the beginning.

“Since we introduced this six years ago — we won’t rehash all the issues we’ve had when we introduced it — we’ve done a huge investment in getting the map up to par,” says Apple SVP Eddy Cue, who now owns Maps, in an interview last week. “When we launched, a lot of it was all about directions and getting to a certain place. Finding the place and getting directions to that place. We’ve done a huge investment of making millions of changes, adding millions of locations, updating the map and changing the map more frequently. All of those things over the past six years.”

But, Cue says, Apple has room to improve on the quality of Maps, something that most users would agree on, even with recent advancements.

“We wanted to take this to the next level,” says Cue. “We have been working on trying to create what we hope is going to be the best map app in the world, taking it to the next step. That is building all of our own map data from the ground up.”

In addition to Cue, I spoke to Apple VP Patrice Gautier and over a dozen Apple Maps team members at its mapping headquarters in California this week about its efforts to re-build Maps, and to do it in a way that aligned with Apple’s very public stance on user privacy.

If, like me, you’re wondering whether Apple thought of building its own maps from scratch before it launched Maps, the answer is yes. At the time, there was a choice to be made about whether or not it wanted to be in the business of Maps at all. Given that the future of mobile devices was becoming very clear, it knew that mapping would be at the core of nearly every aspect of its devices from photos to directions to location services provided to apps. Decision made, Apple plowed ahead, building a product that relied on a patchwork of data from partners like TomTom, OpenStreetMap and other geo data brokers. The result was underwhelming.

Almost immediately after Apple launched Maps, it realized that it was going to need help and it signed on a bunch of additional data providers to fill the gaps in location, base map, point-of-interest and business data.

It wasn’t enough.

“We decided to do this just over four years ago. We said, ‘Where do we want to take Maps? What are the things that we want to do in Maps?’ We realized that, given what we wanted to do and where we wanted to take it, we needed to do this ourselves,” says Cue.

Because Maps is so core to so many functions, success wasn’t tied to just one function. Maps needed to be great at transit, driving and walking — but also as a utility used by apps for location services and other functions.

Cue says that Apple needed to own all of the data that goes into making a map, and to control it from a quality as well as a privacy perspective.

There’s also the matter of corrections, updates and changes entering a long loop of submission to validation to update when you’re dealing with external partners. The Maps team would have to be able to correct roads, pathways and other updating features in days or less, not months. Not to mention the potential competitive advantages it could gain from building and updating traffic data from hundreds of millions of iPhones, rather than relying on partner data.

Cue points to the proliferation of devices running iOS, now numbering in the millions, as a deciding factor to shift its process.

“We felt like because the shift to devices had happened — building a map today in the way that we were traditionally doing it, the way that it was being done — we could improve things significantly, and improve them in different ways,” he says. “One is more accuracy. Two is being able to update the map faster based on the data and the things that we’re seeing, as opposed to driving again or getting the information where the customer’s proactively telling us. What if we could actually see it before all of those things?”

I query him on the rapidity of Maps updates, and whether this new map philosophy means faster changes for users.

“The truth is that Maps needs to be [updated more], and even are today,” says Cue. “We’ll be doing this even more with our new maps, [with] the ability to change the map real-time and often. We do that every day today. This is expanding us to allow us to do it across everything in the map. Today, there’s certain things that take longer to change.

“For example, a road network is something that takes a much longer time to change currently. In the new map infrastructure, we can change that relatively quickly. If a new road opens up, immediately we can see that and make that change very, very quickly around it. It’s much, much more rapid to do changes in the new map environment.”

So a new effort was created to begin generating its own base maps, the very lowest building block of any really good mapping system. After that, Apple would begin layering on living location data, high resolution satellite imagery and brand new intensely high resolution image data gathered from its ground cars until it had what it felt was a ‘best in class’ mapping product.

There is really only one big company on earth that owns an entire map stack from the ground up: Google.

Apple knew it needed to be the other one. Enter the vans.

Apple vans spotted

Though the overall project started earlier, the first glimpse most folks had of Apple’s renewed efforts to build the best Maps product was the vans that started appearing on the roads in 2015 with ‘Apple Maps’ signs on the side. Capped with sensors and cameras, these vans popped up in various cities and sparked rampant discussion and speculation.

The new Apple Maps will be the first time the data collected by these vans is actually used to construct and inform its maps. This is their coming out party.

Some people have commented that Apple’s rigs look more robust than the simple GPS + Camera arrangements on other mapping vehicles — going so far as to say they look more along the lines of something that could be used in autonomous vehicle training.

Apple isn’t commenting on autonomous vehicles, but there’s a reason the arrays look more advanced: they are.

Earlier this week I took a ride in one of the vans as it ran a sample route to gather the kind of data that would go into building the new maps. Here’s what’s inside.

In addition to a beefed-up GPS rig on the roof, the van carries four LiDAR arrays mounted at the corners and eight cameras shooting overlapping high-resolution images. There’s also the standard physical measuring tool attached to a rear wheel that allows for precise tracking of distance and image capture. In the rear there is a surprising lack of bulky equipment. Instead, it’s a straightforward Mac Pro bolted to the floor, attached to an array of solid state drives for storage. A single USB cable routes up to the dashboard, where the actual mapping capture software runs on an iPad.

While mapping, a driver…drives, while an operator takes care of the route, ensuring that a coverage area that has been assigned is fully driven and monitoring image capture. Each drive captures thousands of images as well as a full point cloud (a 3D map of space defined by dots that represent surfaces) and GPS data. I later got to view the raw data presented in 3D and it absolutely looks like the quality of data you would need to begin training autonomous vehicles.

More on why Apple needs this level of data detail later.

When the images and data are captured, they are encrypted on the fly and recorded onto the SSDs. Once full, the SSDs are pulled out, replaced and packed into a case that is delivered to Apple’s data center, where a suite of software eliminates private information like faces and license plates from the images. From the moment of capture to the moment they’re sanitized, the images are encrypted, with one key held in the van and the other in the data center. Technicians and software further down the mapping pipeline never see unsanitized data.

This is just one element of Apple’s focus on the privacy of the data it is utilizing in New Maps.

Probe data and privacy

Throughout every conversation I have with any member of the team throughout the day, privacy is brought up and emphasized. This is obviously by design, as Apple wants to impress upon me as a journalist that it’s taking this very seriously, but it doesn’t change the fact that privacy is evidently built in from the ground up, and I could not find a false note in any of the technical claims or the conversations I had.

Indeed, from the data security folks to the people whose job it is to actually make the maps work well, the constant refrain is that Apple does not feel that it is being held back in any way by not hoovering every piece of customer-rich data it can, storing and parsing it.

The consistent message is that the team feels it can deliver a high quality navigation, location and mapping product without the directly personal data used by other platforms.

“We specifically don’t collect data, even from point A to point B,” notes Cue. “We collect data — when we do it — in an anonymous fashion, in subsections of the whole, so we couldn’t even say that there is a person that went from point A to point B. We’re collecting the segments of it. As you can imagine, that’s always been a key part of doing this. Honestly, we don’t think it buys us anything [to collect more]. We’re not losing any features or capabilities by doing this.”

The segments that he is referring to are sliced out of any given person’s navigation session. Neither the beginning nor the end of any trip is ever transmitted to Apple. Rotating identifiers, not personal information, are assigned to any data or requests sent to Apple, and it augments the ‘ground truth’ data provided by its own mapping vehicles with this ‘probe data’ sent back from iPhones.

Because only random segments of any person’s drive are ever sent, and that data is completely anonymized, there is never a way to tie any trip to a single individual. The local system signs the IDs and only it knows who an ID refers to. Apple is working very hard here to not know anything about its users. This kind of privacy can’t be added on at the end; it has to be woven in at the ground level.

Because Apple’s business model does not rely on it serving, say, an ad for a Chevron on your route to you, it doesn’t need to even tie advertising identifiers to users.

Any personalization or Siri requests are all handled on-board by the iOS device’s processor. So if you get a drive notification that tells you it’s time to leave for your commute, that’s learned, remembered and delivered locally, not from Apple’s servers.

That’s not new, but it’s important to note given the new thing to take away here: Apple is flipping on the power of having millions of iPhones passively and actively improving their mapping data in real time.

In short: traffic, real-time road conditions, road systems, new construction and changes in pedestrian walkways are about to get a lot better in Apple Maps.

The secret sauce here is what Apple calls probe data: essentially, little slices of vector data that represent direction and speed, transmitted back to Apple completely anonymized, with no way to tie them to a specific user or even any given trip. Apple is reaching in and sipping a tiny amount of data from millions of users instead, giving it a holistic, real-time picture without compromising user privacy.

If you’re driving, walking or cycling, your iPhone can already tell this. Now, if it knows you’re driving, it can also send relevant traffic and routing data in these anonymous slivers to improve the entire service. This only happens if your Maps app has been active (say, you check the map or look for directions). If you’re actively using GPS for walking or driving, the updates are more precise and can help with improvements like charting new pedestrian paths through parks, building out the map’s overall quality.

All of this, of course, is governed by whether you opted into location services and can be toggled off using the maps location toggle in the Privacy section of settings.

Apple says that this will have a near-zero effect on battery life or data usage, because you’re already using the Maps features when any probe data is shared, and the sharing draws only a fraction of the power those activities consume.

From the point cloud on up

But maps cannot live on ground truth and mobile data alone. Apple is also gathering new high resolution satellite data to combine with its ground truth data for a solid base map. It’s then layering satellite imagery on top of that to better determine foliage, pathways, sports facilities, building shapes and pathways.

After the downstream data has been cleaned of license plates and faces, it gets run through a bunch of computer vision programming to pull out addresses, street signs and other points of interest. These are cross-referenced with publicly available data, like addresses held by the city and new construction of neighborhoods or roadways that comes from city planning departments.

But one of the special sauce bits that Apple is adding to the mix of mapping tools is a full on point cloud that maps the world around the mapping van in 3D. This allows them all kinds of opportunities to better understand what items are street signs (retro-reflective rectangular object about 15 feet off the ground? Probably a street sign) or stop signs or speed limit signs.
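That street-sign heuristic can be sketched as a toy classifier over a detected point-cloud cluster. Everything here is illustrative: the field names and thresholds are invented for the sketch, not Apple’s actual pipeline, which combines this kind of geometry with the computer vision models described below.

```javascript
// cluster: a planar object detected in the LiDAR point cloud, summarized
// by its size, mounting height, and normalized reflectance intensity.
function looksLikeStreetSign(cluster) {
  const { widthM, heightM, baseHeightM, reflectivity } = cluster;
  // Street signs are roughly rectangular and modest in size.
  const roughlyRectangular =
    widthM > 0.3 && widthM < 1.5 && heightM > 0.3 && heightM < 1.5;
  // Mounted a few meters up (the article's ~15 ft is about 4.6 m).
  const mountedHigh = baseHeightM > 1.8 && baseHeightM < 6;
  // Retro-reflective sheeting returns LiDAR pulses very strongly.
  const retroReflective = reflectivity > 0.7;
  return roughlyRectangular && mountedHigh && retroReflective;
}
```

A real system would score many sign and non-sign classes jointly rather than apply hard thresholds, but the signal is the same: shape, height, and reflectance.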

It seems like it could also enable positioning of navigation arrows in 3D space for AR navigation, but Apple declined to comment on ‘any future plans’ for such things.

Apple also uses semantic segmentation and Deep Lambertian Networks to analyze the point cloud coupled with the image data captured by the car and from high-resolution satellites in sync. This allows 3D identification of objects, signs, lanes of traffic and buildings and separation into categories that can be highlighted for easy discovery.

The coupling of high resolution image data from car and satellite, plus a 3D point cloud results in Apple now being able to produce full orthogonal reconstructions of city streets with textures in place. This is massively higher resolution and easier to see, visually. And it’s synchronized with the ‘panoramic’ images from the car, the satellite view and the raw data. These techniques are used in self driving applications because they provide a really holistic view of what’s going on around the car. But the ortho view can do even more for human viewers of the data by allowing them to ‘see’ through brush or tree cover that would normally obscure roads, buildings and addresses.

This is hugely important when it comes to the next step in Apple’s battle for supremely accurate and useful Maps: human editors.

Apple has had a team of tool builders working specifically on a toolkit that can be used by human editors to vet and parse data, street by street. The editor’s suite includes tools that let human editors assign specific geometries to Flyover buildings (think Salesforce Tower’s unique ridged dome) so that they are instantly recognizable. It lets editors look at real images of street signs shot by the car right next to 3D reconstructions of the scene and computer vision detection of the same signs, instantly recognizing them as accurate or not.

Another tool corrects addresses, letting an editor quickly move an address to the center of a building, determine whether it’s misplaced and shift it around. It also allows for access points to be set, making Apple Maps smarter about the ‘last 50 feet’ of your journey. You’ve made it to the building, but which street is the entrance actually on? And how do you get into the driveway? With a couple of clicks, an editor can make that permanently visible.

“When we take you to a business and that business exists, we think the precision of where we’re taking you to, from being in the right building,” says Cue. “When you look at places like San Francisco or big cities from that standpoint, you have addresses where the address name is a certain street, but really, the entrance in the building is on another street. They’ve done that because they want the better street name. Those are the kinds of things that our new Maps really is going to shine on. We’re going to make sure that we’re taking you to exactly the right place, not a place that might be really close by.”
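The ‘last 50 feet’ idea above can be sketched as a tiny data model: an address point for where the address geocodes, plus an optional editor-set access point for the actual entrance. The `Place` type and field names here are hypothetical, purely to illustrate the concept:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

LatLon = Tuple[float, float]

@dataclass
class Place:
    """Hypothetical model: the mailing address and the actual
    entrance can sit on different streets."""
    name: str
    address_point: LatLon                   # where the address geocodes to
    access_point: Optional[LatLon] = None   # editor-set entrance, if known

def navigation_target(place: Place) -> LatLon:
    # Prefer the editor-set entrance; fall back to the address point.
    return place.access_point or place.address_point
```

The design point is simply that navigation routes to the access point when one has been set, instead of the geocoded address.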

Water, swimming pools (new to Maps entirely), sporting areas and vegetation are now more prominent and fleshed out thanks to new computer vision and satellite imagery applications. So Apple had to build editing tools for those as well.

Many hundreds of editors will be using these tools, in addition to the thousands of employees Apple already has working on maps, but the tools had to be built first, now that Apple is no longer relying on third parties to vet and correct issues.

And the team also had to build computer vision and machine learning tools that allow it to determine whether there are issues to be found at all.

Anonymous probe data from iPhones, visualized, looks like thousands of dots, ebbing and flowing across a web of streets and walkways, like a luminescent web of color. At first, chaos. Then, patterns emerge. A street opens for business, and nearby vessels pump orange blood into the new artery. A flag is triggered and an editor looks to see if a new road needs a name assigned.

A new intersection is added to the web and an editor is flagged to make sure that the left turn lanes connect correctly across the overlapping layers of directional traffic. This has the added benefit of massively improved lane guidance in the new Apple Maps.
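The flagging loop described in the last two paragraphs (heavy probe traffic where no mapped road exists) can be sketched like this. The grid-cell approach, cell size and threshold are assumptions for illustration; Apple hasn’t published how its flagging actually works:

```python
from collections import Counter

def flag_new_roads(probe_points, known_road_cells, threshold=50):
    """Toy version of the flagging idea: bucket anonymous probe points
    into grid cells and flag any cell that sees heavy traffic but isn't
    covered by the known road network.

    probe_points: iterable of (lat, lon) tuples
    known_road_cells: set of grid cells already occupied by mapped roads
    """
    cell_size = 0.001  # roughly a 100 m grid; an arbitrary illustration value
    counts = Counter(
        (round(lat / cell_size), round(lon / cell_size))
        for lat, lon in probe_points
    )
    # Busy cells with no mapped road get flagged for a human editor.
    return {cell for cell, n in counts.items()
            if n >= threshold and cell not in known_road_cells}
```

In practice this would feed an editor queue rather than change the map directly, matching the human-in-the-loop workflow the article describes.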

Apple is counting on this combination of human and AI flagging to allow editors to first craft base maps and then maintain them as the ever-changing biomass wreaks havoc on roadways, addresses and the occasional park.

Here there be Helvetica

Apple’s new Maps, like many other digital maps, display vastly differently depending on scale. If you’re zoomed out, you get less detail. If you zoom in, you get more. But Apple has a team of cartographers on staff that work on more cultural, regional and artistic levels to ensure that its Maps are readable, recognizable and useful.

These teams have goals that are at once concrete and a bit out there — in the best traditions of Apple pursuits that intersect the technical with the artistic.

The maps need to be usable, but they also need to fulfill cognitive goals on cultural levels that go beyond what any given user might know they need. For instance, in the US, it is very common to have maps that have a relatively low level of detail even at a medium zoom. In Japan, however, the maps are absolutely packed with details at the same zoom, because that increased information density is what is expected by users.

This is the department of details. They’ve reconstructed replicas of hundreds of actual road signs to make sure that the shield on your navigation screen matches the one you’re seeing on the highway road sign. When it comes to public transport, Apple licensed all of the typefaces that you see on your favorite subway systems, like Helvetica for NYC. And the line numbers are in the exact same order that you’re going to see them on the platform signs.

It’s all about reducing the cognitive load that it takes to translate the physical world you have to navigate through into the digital world represented by Maps.

Bottom line

The new version of Apple Maps will be in preview next week with just the Bay Area of California going live. It will be stitched seamlessly into the ‘current’ version of Maps, but the difference in quality level should be immediately visible based on what I’ve seen so far.

Better road networks, more pedestrian information, sports areas like baseball diamonds and basketball courts, more land cover including grass and trees represented on the map as well as buildings, building shapes and sizes that are more accurate. A map that feels more like the real world you’re actually traveling through.

Search is also being revamped to make sure that you get more relevant results (on the correct continents) than ever before. Navigation, especially pedestrian guidance, also gets a big boost. Parking areas and building details to get you the last few feet to your destination are included as well.

What you won’t see, for now, is a full visual redesign.

“You’re not going to see huge design changes on the maps,” says Cue. “We don’t want to combine those two things at the same time because it would cause a lot of confusion.”

Apple Maps is getting the long awaited attention it really deserves. By taking ownership of the project fully, Apple is committing itself to actually creating the map that users expected of it from the beginning. It’s been a lingering shadow on iPhones, especially, where alternatives like Google Maps have offered more robust feature sets that are so easy to compare against the native app but impossible to access at the deep system level.

The argument has been made ad nauseam, but it’s worth saying again that if Apple thinks that mapping is important enough to own, it should own it. And that’s what it’s trying to do now.

“We don’t think there’s anybody doing this level of work that we’re doing,” adds Cue. “We haven’t announced this. We haven’t told anybody about this. It’s one of those things that we’ve been able to keep pretty much a secret. Nobody really knows about it. We’re excited to get it out there. Over the next year, we’ll be rolling it out, section by section in the US.”

Source: Mobile – Techcrunch

Some low-cost Android phones shipped with malware built in

Some low-cost Android phones shipped with malware built in

Avast has found that many low-cost, non-Google-certified Android phones shipped with a strain of malware built in that could push users to download apps they didn’t intend to install. The malware, called Cosiloon, overlays advertisements over the operating system in order to promote apps or even trick users into downloading apps. Affected devices shipped from ZTE, Archos and myPhone.

The app consists of a dropper and a payload. “The dropper is a small application with no obfuscation, located on the /system partition of affected devices. The app is completely passive, only visible to the user in the list of system applications under ‘settings.’ We have seen the dropper with two different names, ‘CrashService’ and ‘ImeMess,’” wrote Avast. The dropper then connects with a website to grab the payloads that the hackers wish to install on the phone. “The XML manifest contains information about what to download, which services to start and contains a whitelist programmed to potentially exclude specific countries and devices from infection. However, we’ve never seen the country whitelist used, and just a few devices were whitelisted in early versions. Currently, no countries or devices are whitelisted. The entire Cosiloon URL is hardcoded in the APK.”
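Based on Avast’s description of the manifest (a download list, services to start and a country/device whitelist), here is a hypothetical analyst’s sketch of the whitelist logic. The XML element names and the `should_infect` helper are invented, since this excerpt doesn’t include the real schema:

```python
import xml.etree.ElementTree as ET

# Illustrative manifest; element names are assumptions, not Cosiloon's actual format.
SAMPLE_MANIFEST = """
<manifest>
  <download url="http://example.invalid/payload.apk"/>
  <service name="AdOverlayService"/>
  <whitelist><device>model-x</device></whitelist>
</manifest>
"""

def should_infect(manifest_xml: str, device_model: str) -> bool:
    """A device is skipped only if it appears on the whitelist; per Avast,
    current manifests whitelist nothing, so every device qualifies."""
    root = ET.fromstring(manifest_xml)
    whitelisted = {d.text for d in root.iter("device")}
    return device_model not in whitelisted

def payload_urls(manifest_xml: str):
    """URLs the dropper would fetch, per the manifest's download entries."""
    root = ET.fromstring(manifest_xml)
    return [d.get("url") for d in root.iter("download")]
```

Note that because the manifest travels over plain HTTP, anyone on the network path could also tamper with it, which compounds the risk Avast describes.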

The dropper is part of the system’s firmware and is not easily removed.

To summarize:

The dropper can install application packages defined by the manifest downloaded via an unencrypted HTTP connection without the user’s consent or knowledge.
The dropper is preinstalled somewhere in the supply chain, by the manufacturer, OEM or carrier.
The user cannot remove the dropper, because it is a system application, part of the device’s firmware.

Avast can detect and remove the payloads, and it recommends following these instructions to disable the dropper. If the dropper spots antivirus software on your phone, it will stop showing notifications, but it will still recommend downloads as you browse in your default browser, a gateway to grabbing more (and worse) malware. Engadget notes that this vector is similar to the Lenovo “Superfish” exploit, which shipped thousands of computers with malware built in.

Source: Mobile – Techcrunch

GoPro launches TradeUp program to swap old cameras for discounts

GoPro launches TradeUp program to swap old cameras for discounts
GoPro is willing to take that old digital camera stuffed in your junk drawer even if it’s not a GoPro. Through a program called TradeUp, the camera company will discount the GoPro HERO6 Black by $50 and the Fusion by $100 when buyers trade in any digital camera. The company tried this last year for 60 days, but as of right now, GoPro says this offer does not expire.
This offer works with any digital camera, including old GoPros. It clearly addresses something we noticed years ago — there’s often little reason to buy a new GoPro because their past products were so good.
GoPro tried this in 2017 for 60 days and says 12,000 customers took advantage of the program.
The service is reminiscent of what wireless carriers do to encourage smartphone owners to buy new phones. It’s a clever solution, though other options could net more money. Users could sell their camera on eBay or use other trade-in programs. Best Buy lets buyers trade in old cameras, too, and currently gives $60 for a GoPro Hero3+ Black and $55 for an HD Hero 960.
GoPro is in a tough position, and this is clearly a plan to spur sales. The company’s stock is trading around an all-time low after a brief upswing following a report that Chinese electronics maker Xiaomi was considering buying the company. The company also recently started licensing its camera technology and trimmed its product line, while introducing a new, $200 camera.

Source: Gadgets – techcrunch

StudioBricks is a Barcelona-based startup that sends you a studio in a box

StudioBricks is a Barcelona-based startup that sends you a studio in a box
My friend Rick is a voiceover artist and works in Ohio – right along the flight path for jets taking off and landing at the Columbus airport. As a result, he said, he had to record late at night when the airport closed, a limitation that he found exasperating.
Enter StudioBricks, a cool startup from Barcelona. Founded by Guillermo Jungbauer, the small company makes and sells soundproof studios that click together like LEGO. The company started in 2008 and created a US subsidiary in 2014.

StudioBricks aren’t cheap. Rick paid $9,940 for his, including almost $2,000 in shipping. However, he said, it’s been a life-saver.
“The Studiobricks sound isolation booths are designed to be incredibly fast and easy to install without compromising the booths excellent sound isolating properties,” said Jungbauer. “This is achieved thanks to its modular panels which are built of high performance sound isolating materials and can simply be slotted together.”
The company sold 1,053 cabins in 2017 and it’s on track to keep growing.

“About ten years ago I created the first booth as rehearsal space out of his own need as saxophonist,” said Jungbauer. “I developed the first bricks with acoustic engineers already having in mind the market possibilities.”
The system includes a ventilation system, a heavy, soundproof door and solid, soundproofed wall panels. Rick, in his long build post, found it easy to build and quite effective at keeping the plane noise at bay.
“From the beginning on Studiobricks aims to be eco-friendly. We are in a continuous process of improvement and have a strong commitment with the environment,” said Jungbauer. “That means that both, on an organisational level and product level we are improving continuously our processes and product considering the best options regarding the environment. For example years ago we changed our lacquer to a water based one. Our plant is the first and right now only in Spain using a biomass based central heating boiler.”
It’s cool to see a small European company selling a niche product gain such success. Because the company solves a notoriously difficult and wildly frustrating problem they are getting all the organic traction they need to keep going. Given the rise of corporate podcasting and other recording needs, a system like StudioBricks makes perfect sense. Considering it can be put together by two people in a few hours it is almost like the Ikea of vocal studios – compact, easy to build, and incredibly useful.
And now Rick doesn’t have to worry about the Delta flight from JFK intruding on his audio book reading session. Ganar-ganar, as they say in Barcelona.

Source: Gadgets – techcrunch