An ode to Apple’s awful MacBook keyboard

Yes I am very late to this. But I am also very annoyed so I am adding my voice to the now sustained chorus of complaints about Apple’s redesigned Mac keyboard: How very much it sucks. Truly, madly, deeply.
This is the keyboard that Apple “completely redesigned” in 2015, in its quest for size zero hardware, switching from a scissor mechanism for the keys to what it described then as the “new Apple-designed butterfly mechanism” — touting this as 40% thinner and 4x more stable.
Reader, there is nothing remotely beautiful and butterfly-esque about the experience of depressing these keys. Scattershot staccato clattering, as your fingers are simultaneously sucked in and involuntarily hammer out a grapeshot of key strikes, is what actually happens. It’s brutalist and unforgiving. Most egregiously, it’s not reliably functional.
The redesigned mechanism has resulted in keys that not only feel different when pressed vs the prior MacBook keyboard (which was more spongy, for sure, but that meant keys were at reduced risk of generating accidental strikes vs their barely-there, trigger-sensitive replacements, which feel like they have a 40% smaller margin for keystrike error), but that have also turned out to be failure prone. Particles of dust can find their way in between the keys, as dust is wont to do, and mess with the smooth functioning of key presses, requiring an official Apple repair.
Yes, just a bit of dust! Move over ‘the princess and the pea’: Apple and the dust mote is here! ‘Just use it in a vacuum’ shouldn’t be an acceptable usability requirement for a very expensive laptop.
Apple has also had to make these keyboards quieter. Because, as I say, the act of using the keyboard results in audible clackclackery. It’s like mobile phone keyclicks suddenly got dizzyingly back in fashion. (Or, well, Apple designers got to overindulge their blue-sky thinking around the idea that ‘in space no one can hear you type’.)
Several colleagues have garnered dagger glances and been told to dial it down at conferences on account of all the key clattering as they worked. Yet a keyboard is made for working. It’s a writing tool. Or it should be. Instead, Apple has made a keyboard for making audible typos. It’s shockingly bad.
As design snafus go, this is up there with antenna-gate. Except actually it’s much worse. There’s no ‘just don’t hold it that way’ workaround here. You can’t press keys on a keyboard radically differently. I guess you could type really slowly to try to avoid making all these high speed typos. But that would have an obvious impact on your ability to work by slowing down your ability to write. So, again, an abject mess.
I’ve only had this Oath-issued 2017 MacBook Pro (in long-held-off exchange for my trusty MacBook Air, whose admittedly grimy and paint-worn keys were nonetheless 100% functional after years of writerly service) for about a month but the keys appear to have a will of their own, whipping themselves into a possessive frenzy almost every time they’re pressed, and spewing out all manner of odd typos, mis-strikes and mistakes.
This demonic keyboard has summoned Siri unasked. (Thanks, stupidly pointless Touch Bar!) It has also somehow nearly delivered an ‘I’m not interested’ auto-response to a stranger who wrote me at length on LinkedIn to thoughtfully thank me for an earlier article. (Fortunately I didn’t have auto-send enabled so I could catch that unintended slapdown in the act before it was delivered. No thanks to the technologies involved.)
At the same time Caps Lock routinely fails to engage when pressed, as if it’s practising for when it’ll be broken. It just as often fails to disengage when re-pressed. ‘Craps Out Lock’, more like. I fear it’s beset by dust motes already. Which is hard to avoid because, y’know, everything in the world is made of dust.
The keyboard also frustrates because of a jarring juxtaposition: individual keys that depress too willingly, seeming to suck the typos from your fingers as letters get snatched out of sequence (and even whole words coaxed out of line), coupled with a backspace key that refuses to perform quickly enough (I’ve had to crank it right up to the very fastest setting), so it can’t gobble up the multiple erroneous strikes fast enough to edit out all the BS the keyboard is continually spewing.
The result? A laptop that’s lightning quick at creating a typo-ridden mess, and slow as hell to clean it up.
In short, it’s a mess. A horrible mess that makes a mockery of the Apple catchphrase of yore (‘it just works’) by actively degrading the productivity of writing — interrupting your work with pointless sound and an alphabetic soup of fury.
The redesigned keyboard has been denounced by Apple loyalists such as John Gruber — who in April called it “one of the biggest design screwups in Apple history.”
He precision-hammered his point home with this second economical sentence: “Everyone who buys a MacBook depends upon the keyboard and this keyboard is undependable.”
Though it was Casey Johnston, writing for The Outline, who raised the profile of the problem last year, kicking up a major stink over her MacBook keys acting up (or dying) after a brush with invisible dust.
Since then keyboard-related problems have garnered Apple at least one class action lawsuit.
Meanwhile, the company has responded to this hardware headache of its own design like the proverbial thief in the night, quietly fiddling with the internals when no one was looking. Most notably it slotted in a repair earlier this year, when it added a sort of silicone gum shield to wrap the offending butterfly mechanism, which is presumably supposed to prevent dust from wreaking its terribly quotidian havoc. (Though it’s no use to me, right here, right now, with my corporate provisioned 2017 MBP.)
We know this thanks to the excellent work done by iFixit this summer, when it took apart one of Apple’s redesigned redesigned keyboards and found a thin rubberized film had been added under the keycaps. (Looking at this translucent addition, I am reminded of Alien designer HR Giger’s biomechanical concoctions. And of Ash’s robotic hard-on for poking around inside the disemboweled facehugger. But I digress.)
Shamelessly Apple tried to sell this tweak to journalists as solely a fix for those noisy key clicks. iFixit was not at all convinced.
“This flexible enclosure is quite obviously an ingress-proofing measure to cover up the mechanism from the daily onslaught of microscopic dust. Not — to our eyes — a silencing measure,” it wrote in July. “In fact, Apple has a patent for this exact tech designed to ‘prevent and/or alleviate contaminant ingress.’”
And the date on Apple’s ingress-proofing key-cap condom patent? September 8, 2016. Read that and weep, MacBook Pro second-half 2016, 2017 and first half 2018 owners.
So if, like me, you’re saddled with a 2017 (or earlier) MBP there’s sweet F.A. you can do about this fatal design flaw in the core interfacing mechanism you must daily touch. Abstention is not an option. We must typo and wait for the inexorable, dust-based doom to strike the space bar or the ‘E’ key — which will then make the typing experience even more miserable (and require a trip to an Apple store to swaddle the misbehaving keys in rubber — leaving us computerless, most probably, in the meanwhile).
There is an entire novel written without the letter E. I propose that Apple’s failed keyboard redesign be christened the ‘Gadsby’ in its honor — because, ye gads, it’s awful.
This is especially, especially frustrating because the MacBook Air keyboard was so very, very good.
Not good — it was great. It was as close to typing perfection as I’ve come across in a computer. And I’ve been typing on keyboards for a very long time.
Why mess with such a good thing?! Marginally thinner than what was already exceptionally thin hardware is hardly something consumers clamour for.
People are far more interested in having the thing they bought and/or use actually doing the job they need it for. And definitely not letting them down.
(Or “defienmtely nort letting them down” as the keyboard just reworked the line. I really should have saved every typo and posted a mutant mirror text beneath this one, containing all the thousands of organic instances of ‘found poetry’ churned out by the keyboard’s inner life/poet/drunk.)
If shaving 40% off the profile of the key mechanism transforms an incredibly reliable keyboard into a dust-prone, typo-spewing monster, that’s not progress; it’s folly of the highest order.
Offering free repairs to affected users, as Apple finally did in June, doesn’t even begin to fix this fuck up.
Not least because that’s only a fix for dust-based death; there isn’t a rubber film in the universe that could make typing on these keys a pleasing experience.
What does it tell us when a company starts making the quality of its premium products worse? Especially a company famed for high-end design and high quality hardware? (Moreover, a company now worth a staggering $1tr+ in market capitalization?)
It smacks of complacency, misaligned priorities and worrying blindspots — at the very least, if not a wider lack of perspective outside the donut-shaped mothership. (Perhaps there’s been a little too much gathering around indoors in Cupertino lately, and not enough looking out critically at a flaking user experience… )
Or else, well, it smacks of cynical profiteering.
Clearly it’s not a good look. Apple’s reputation rests in large part on its hardware being perceived as reliable. On the famous Steve Jobs sales pitch that ‘it just works’. So Apple designing a keyboard that’s great at breaking for no reason at all and lightning fast at churning out typos is a truly epic fail.
Of course consumer electronic designs won’t always work out. Some failure is to be expected — and will be understood. But what makes the keyboard situation so much worse is Apple’s failure to recognise and accept the problem so that it could promptly clean up the mess.
Its apparent inability (for so long) to acknowledge there even was a problem is a particularly worrying sign. Having to sneak in a late fix because you didn’t have the courage to publicly admit you screwed up is not a good look for any company — let alone a company with such a long, rich and storied history as Apple.
More cynical folks out there might whisper it’s design flaw by design; a strategic fault-line intended to push users towards an upgrade faster than they might otherwise have unzipped their wallets. Though Apple offering free keyboard repairs (albeit tardily) contradicts that conspiracy theory.
Yet the notion of ‘built in obsolescence’ persists where consumer computing hardware is concerned, given how corporate profits do tend to be locked to upgrade cycles.
In Apple’s case it’s an easy charge to level at the company given its business model is still, in very large part, driven by hardware sales. So Apple doing anything that risks encouraging consumers to feel it’s intentionally making its products worse is also folly of the highest order.
Apple does have some active accusations to deal with on that front too. For example, a consumer group filed a complaint of planned obsolescence in France late last year — on account of Apple throttling the performance of older iPhones — something the company has faced multiple complaints over, and some regulatory scrutiny. So again, it really needs to tread carefully.
Tim Cook’s Apple cannot afford to be slipshod in its designs nor its communication. Jobs got more latitude on the latter front because he was such a charismatic persona. Cook is lots of good things but he’s not that; he’s closer to ‘safe pair of hands’ — so company comms should really reflect that.
Apple may be richer than Croesus and king of the premium heap but it can’t risk tarnishing the brand. The mobile space is littered with the toppled monuments of past giants. And the markets where Apple plays are increasingly fiercely fought. Chinese device makers especially are building momentum with lower priced and highly capable consumer hardware. (Huawei displaced Apple from second place in the global smartphone rankings in Q2, for example.)
Apple’s rivals have mercilessly cloned its slender laptop designs and copypasted the look and feel of the iPhone. Reliability and usability are the bedrock of the price premium its brand commands, with privacy a more recent bolt-on. So failing on those fundamentals would be beyond foolish, with so many rivals now pushing cheaper yet very similarly packaged (and shiny) alternatives at consumers — which also often offer equal or even greater feature utility for less money (assuming you’re willing to compromise on privacy).
When it comes to the Mac specifically, the category clearly has not been Apple’s priority for a long time. The iPhone has been its star performer of the past decade, while growing its services business is the fresh focus for Cook. Yet when Cook’s Apple has paid a little attention to the Mac category it’s often been to fiddle unnecessarily — such as by clumsily reworking a great keyboard for purely cosmetic reasons, or to add a silly strip of touchscreen that’s at best distracting and (in my experience) just serves up even more unwanted keystrikes. So thrice blighted and the opposite of useful: a fiddly gimmick.
This is worrying.
Apple is a company founded with the word ‘Computer’ in its name. Computing is its DNA. And, even now, while smartphones and tablets are great for lots of things they are not great for sustained writing. For writing — and indeed working — at any length a laptop remains the perfect tool.
There’s no touchscreen in the world that can beat a well-designed keyboard for speed, comfort and typing convenience. To a writer, using a great keyboard almost feels like flying.
You wouldn’t have had to explain that to Jobs. He honed his computer sales pitch to the point of poetry — famously dubbing the machine a ‘bicycle for the mind’.
Now, sadly, saddled with this flatfooted and frustratingly flawed mechanism, it’s like Apple shipped a bicycle with a pair of needles where the pedals should be.
Not so much thinking different as failing to understand what the machine is for.

Source: Gadgets – techcrunch

Scientists make a touch tablet that rolls and scrolls

Research scientists at Queen’s University’s Human Media Lab have built a prototype touchscreen device that’s neither smartphone nor tablet but kind of both — and more besides. The device, which they’ve christened the MagicScroll, is inspired by ancient (papyrus/paper/parchment) scrolls so it takes a rolled-up, cylindrical form factor — enabled by a flexible 7.5-inch touchscreen housed in the casing.

This novel form factor, which they made using 3D printing, means the device can be used like an old-school Rolodex (remember those?!) for flipping through on-screen contacts quickly by turning a physical rotary wheel built into the edge of the device. (They’ve actually added one on each end.)
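(For the code-minded: the wheel-to-list mapping is simple enough to sketch. Everything below, from the tick count to the names, is invented for illustration, not anything the lab has published.)

```python
# Minimal sketch of Rolodex-style flipping driven by a rotary wheel.
# The encoder and item names are hypothetical stand-ins.
contacts = ["Alice", "Bob", "Carol", "Dave"]

class ScrollWheel:
    """Maps raw encoder ticks to an index into a circular list."""
    TICKS_PER_ITEM = 4  # assumed detents needed to advance one card

    def __init__(self, items):
        self.items = items
        self.ticks = 0

    def turn(self, delta_ticks: int) -> str:
        self.ticks += delta_ticks
        index = (self.ticks // self.TICKS_PER_ITEM) % len(self.items)
        return self.items[index]  # the contact "card" now on screen

wheel = ScrollWheel(contacts)
print(wheel.turn(4))   # one detent's worth forward -> "Bob"
print(wheel.turn(-8))  # wrap backwards around the list -> "Dave"
```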

Then, when more information or a deeper dive is required, the user is able to pop the screen out of the casing to expand the visible display real estate. The flexible screen on the prototype has a resolution of 2K. So more mid-tier mobile phone of yore than crisp iPhone Retina display at this nascent stage.

The scientists also reckon the scroll form factor offers a pleasingly ergonomic option for making actual phone calls too, given that a rolled up scroll can sit snugly against the face.

Though they admit their prototype is still rather large at this stage — albeit, that just adds to the delightfully retro feel of the thing, making it come over like a massive mobile phone of the 1980s. Like the classic Motorola DynaTAC 8000X of 1984.

While still bulky at this R&D stage, the team argues the cylindrical, flexible screen form factor of their prototype offers advantages by being lightweight and easier to hold with one hand than a traditional tablet device, such as an iPad. And when rolled up they point out it can also fit in a pocket. (Albeit, a large one.)

They also imagine it being used as a dictation device or pointing device, as well as a voice phone. And the prototype includes a camera — which allows the device to be controlled using gestures, similar to Nintendo’s ‘Wiimote’ gesture system.

In another fun twist they’ve added robotic actuators to the rotary wheels so the scroll can physically move or spin in place in various scenarios, such as when it receives a notification. Clocky eat your heart out.

“We were inspired by the design of ancient scrolls because their form allows for a more natural, uninterrupted experience of long visual timelines,” said Roel Vertegaal, professor of human-computer interaction and director of the lab, in a statement.

“Another source of inspiration was the old Rolodex filing systems that were used to store and browse contact cards. The MagicScroll’s scroll wheel allows for infinite scroll action for quick browsing through long lists. Unfolding the scroll is a tangible experience that gives a full screen view of the selected item. Picture browsing through your Instagram timeline, messages or LinkedIn contacts this way!”

“Eventually, our hope is to design the device so that it can even roll into something as small as a pen that you could carry in your shirt pocket,” he added. “More broadly, the MagicScroll project is also allowing us to further examine notions that ‘screens don’t have to be flat’ and ‘anything can become a screen’. Whether it’s a reusable cup made of an interactive screen on which you can select your order before arriving at a coffee-filling kiosk, or a display on your clothes, we’re exploring how objects can become the apps.”

The team has made a video showing the prototype in action (embedded below), and will be presenting the project at the MobileHCI conference on Human-Computer Interaction in Barcelona next month.

While any kind of mobile device resembling the MagicScroll is clearly very, very far off even a sniff of commercialization (especially as these sorts of concept devices have long been teased by mobile device firms’ R&D labs — while the companies keep pumping out identikit rectangles of touch-sensitive glass… ), it’s worth noting that Samsung has been rumored to be working on a smartphone with a foldable screen for some years now. And, according to the most recent chatter about this rumor, it might be released next year. Or, well, it still might not.

But whether Samsung’s definition of ‘foldable’ will translate into something as flexibly bendy as the MagicScroll prototype is highly, highly doubtful. A fused clamshell design — where two flat screens could be opened to seamlessly expand them and closed up again to shrink the device footprint for pocketability — seems a much more likely choice for Samsung designers to make, given the obvious commercial challenges of selling a device with a transforming form factor that’s also robust enough to withstand everyday consumer use and abuse.

Add to that, for all the visual fun of these things, it’s not clear that consumers would be inspired to adopt anything so different en masse. Sophisticated (and inevitably fiddly) devices are more likely to appeal to specific niche use cases and user scenarios.

For the mainstream, six inches of touch-sensitive (and flat) glass seems to do the trick.

Source: Mobile – Techcrunch

Huge leak shows off the new iPhone XS

Get ready for a leaked look at the new iPhone XS. 9to5Mac has gotten its hands on an image of Apple’s next generation of iPhone hardware, and the future looks pretty swanky.

The leaked image showcases the new sizing of Apple’s soon-to-be-unveiled flagship bezel-less devices, which likely will have 5.8-inch and 6.5-inch screens, respectively. The phones will be called the iPhone XS, according to the report. The pictured devices represent the higher-end OLED screen models, not the cheaper rumored notch LCD iPhone.

The device will feature a new gold color shell. The iPhone X is currently available in space gray and silver.

Image credit: 9to5mac

A picture is worth a thousand words, but there are still a lot of details we’re waiting on here, obviously. Apple is expected to show off the new phone hardware, as well as a new version of the Apple Watch, at a hardware event on September 12.

Source: Mobile – Techcrunch

The Google Assistant is now bilingual 

The Google Assistant just got more useful for multilingual families. Starting today, you’ll be able to set up two languages in the Google Home app and the Assistant on your phone, and Google Home will then happily react to your commands in both English and Spanish, for example.
Today’s announcement doesn’t exactly come as a surprise, given that Google announced at its I/O developer conference earlier this year that it was working on this feature. It’s nice to see that this year, Google is rolling out its I/O announcements well before next year’s event. That hasn’t always been the case in the past.
Currently, the Assistant is only bilingual and it still has a few languages to learn. But for the time being, you’ll be able to set up any language pair that includes English, German, French, Spanish, Italian and Japanese. More pairs are coming in the future and Google also says it is working on trilingual support, too.
Google tells me this feature will work with all Assistant surfaces that support the languages you have selected. That’s basically all phones and smart speakers with the Assistant, but not the new smart displays, as they only support English right now.

While this may sound like an easy feature to implement, Google notes this was a multi-year effort. To build a system like this, you have to be able to identify multiple languages, understand them and then make sure you present the right experience to the user. And you have to do all of this within a few seconds.
Google says its language identification model (LangID) can now distinguish between 2,000 language pairs. With that in place, the company’s researchers then had to build a system that could turn spoken queries into actionable results in all supported languages. “When the user stops speaking, the model has not only determined what language was being spoken, but also what was said,” Google’s VP Johan Schalkwyk and Google Speech engineer Lopez Moreno write in today’s announcement. “Of course, this process requires a sophisticated architecture that comes with an increased processing cost and the possibility of introducing unnecessary latency.”
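To make that latency trade-off concrete, here is a minimal sketch of the parallel approach the quote implies: run a recognizer per configured language while LangID decides which hypothesis to keep. The function names and dummy models below are my own stand-ins, not Google’s actual APIs.

```python
# Sketch of a bilingual query pipeline in the spirit of the quote:
# recognize in both configured languages in parallel and keep the
# hypothesis for whichever language LangID detects. The two model
# functions below are dummy placeholders, not real Google APIs.
from concurrent.futures import ThreadPoolExecutor

CONFIGURED = ["en-US", "es-ES"]  # the user's chosen language pair

def identify_language(audio: bytes) -> str:
    """Placeholder LangID model: scores only the configured pair."""
    return "es-ES"  # pretend the speaker used Spanish

def recognize(audio: bytes, lang: str) -> str:
    """Placeholder monolingual speech recognizer."""
    return f"<transcript of {len(audio)} bytes in {lang}>"

def transcribe(audio: bytes) -> str:
    # Running recognizers concurrently with LangID avoids paying
    # identification latency *then* recognition latency in sequence,
    # at the cost of doing (and discarding) extra work.
    with ThreadPoolExecutor(max_workers=len(CONFIGURED)) as pool:
        hypotheses = {lang: pool.submit(recognize, audio, lang)
                      for lang in CONFIGURED}
        winner = identify_language(audio)
        return hypotheses[winner].result()

print(transcribe(b"\x00" * 16000))
```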

If you are in Germany, France or the U.K., you’ll now also be able to use the bilingual assistant on a Google Home Max. That high-end version of the Google Home family is going on sale in those countries today.
In addition, Google also today announced that a number of new devices will soon support the Assistant, including the tado° thermostats, a number of new security and smart home hubs (though not, of course, Amazon’s own Ring Alarm), smart bulbs and appliances, including the iRobot Roomba 980, 896 and 676 vacuums. Who wants to have to push a button on a vacuum, after all.

Source: Gadgets – techcrunch

LG is releasing an Android One handset with near flagship specs

Android One is one of a handful of Google projects aimed at helping the mobile operating system run better on entry-level devices. As such, those handsets that qualify for the program are generally pretty middling, at best.

But LG’s G7 One bucks the trend, with some specs that wouldn’t be out of place on a 2018 flagship. Leading the way is the Snapdragon 845, Qualcomm’s top of the line processor, coupled with a 6.1-inch QHD+ display and a 3,000mAh battery. There’s also that familiar notch-up-top design that’s all the rage on flagships these days.

There are certain cost-cutting measures. The bleeding edge dual camera tech that LG prides itself on isn’t on board here. The 4GB of RAM and 32GB of storage are not great, but perfectly acceptable for most. The headphone jack is still in place, which is a good thing for a budget device; it’s silly to expect users to have to factor in the price of Bluetooth headphones.

The handset will be debuting at IFA in Berlin this week. Price is still TBD, but LG promises an “exceptional” one. At the very least, that should mean it comes in well under the company’s flagships.

If LG is able to offer up something truly exceptional from a price perspective, it could be the thing the company needs to help it stand out in a smartphone race that has largely left it behind. It’s a strategy that has worked well for OnePlus, and LG could certainly use the hook.

Source: Mobile – Techcrunch

The Automatica automates pour-over coffee in a charming and totally unnecessary way

Most mornings, after sifting through the night’s mail haul and skimming the headlines, I make myself a cup of coffee. I use a simple pour-over cone and paper filters, and (in what is perhaps my most tedious Seattleite affectation), I grind the beans by hand. I like the manual aspect of it all. Which is why this robotic pour-over machine is to me so perverse… and so tempting.
Called the Automatica, this gadget, currently raising funds on Kickstarter but seemingly complete as far as development and testing, is basically a way to do pour-over coffee without holding the kettle yourself.
You fill the kettle and place your mug and cone on the stand in front of it. The water is brought to a boil and the kettle tips automatically. Then the whole mug-and-cone portion spins slowly, distributing the water around the grounds, stopping after 11 ounces has been distributed over the correct duration. You can use whatever cone and mug you want as long as they’re about the right size.
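(For the tinkerers: that routine amounts to a short open-loop sequence. The sketch below is my guess at its shape, with invented placeholder hardware, and emphatically not the Automatica’s actual firmware.)

```python
# Rough sketch of the brew sequence described above. Every hardware
# class here is an invented placeholder, not Automatica firmware.
import time

TARGET_OUNCES = 11.0

class FakeKettle:
    def heat_to_boil(self): print("heating...")
    def tip(self): print("pouring")
    def level(self): print("pour stopped")

class FakeCarriage:
    def rotate_slowly(self): pass  # swirl water over the grounds
    def stop(self): pass

class FakeScale:
    def __init__(self): self.oz = 0.0
    def ounces_dispensed(self):
        self.oz += 0.5  # pretend water accumulates in the mug
        return self.oz

def brew(kettle, carriage, scale):
    kettle.heat_to_boil()          # bring water to temperature
    kettle.tip()                   # start the pour
    while scale.ounces_dispensed() < TARGET_OUNCES:
        carriage.rotate_slowly()   # distribute water around the cone
        time.sleep(0.01)
    kettle.level()                 # stop pouring at 11 oz
    carriage.stop()

brew(FakeKettle(), FakeCarriage(), FakeScale())
```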
Of course, the whole point of pour-over coffee is that it’s simple: you can do it at home, while on vacation, while hiking or indeed at a coffee shop with a bare minimum of apparatus. All you need is the coffee beans, the cone, a paper filter — although some cones omit even that — and of course a receptacle for the product. (It’s not the simplest — that’d be Turkish, but that’s coffee for werewolves.)
Why should anyone want to disturb this simplicity? Well, the same reason we have the other 20 methods for making coffee: convenience. And in truth, pour-over is already automated in the form of drip machines. So the obvious next question is, why this dog and pony show of an open-air coffee bot?
Aesthetics! Nothing wrong with that. What goes on in the obscure darkness of a drip machine? No one knows. But this — this you can watch, audit, understand. Even if the machinery is complex, the result is simple: hot water swirls gently through the grounds. And although it’s fundamentally a bit absurd, it is a good-looking machine, with wood and brass accents and a tasteful kettle shape. (I do love a tasteful kettle.)
The creators say the machine is built to last “generations,” a promise which must of course be taken with a grain of salt. Anything with electronics has the potential to short out, to develop a bug, to be troubled by humidity or water leaks. The heating element may fail. The motor might stutter or a hinge catch.
But all that is true of most coffee machines, and unlike those, this one appears to be made with care and high-quality materials. The cracking and warping you can expect in thin molded plastic won’t happen to this thing, and if you take care of it, it should at least last several years.
And it better, for the minimum pledge price that gets you a machine: $450. That’s quite a chunk of change. But like audiophiles, coffee people are kind of suckers for a nice piece of equipment.
There is of course the standard crowdfunding caveat emptor; this isn’t a pre-order but a pledge to back this interesting hardware startup, and if it’s anything like the last five or six campaigns I’ve backed, it’ll arrive late after facing unforeseen difficulties with machining, molds, leaks and so on.

Source: Gadgets – techcrunch

Autonomous retail startup Inokyo’s first store feels like stealing

Inokyo wants to be the indie Amazon Go. It’s just launched its prototype cashierless autonomous retail store. Cameras track what you grab from shelves, and with a single QR scan of its app on your way in and out of the store, you’re charged for what you got.

Inokyo’s first store is now open on Mountain View’s Castro Street, selling an array of bougie kombuchas, snacks, protein powders and bath products. It’s sparse and a bit confusing, but offers a glimpse of what might be a commonplace shopping experience five years from now. You can take a look yourself in our demo video below:

“Cashierless stores will have the same level of impact on retail as self-driving cars will have on transportation,” Inokyo co-founder Tony Francis tells me. “This is the future of retail. It’s inevitable that stores will become increasingly autonomous.”

Inokyo (rhymes with Tokyo) is now accepting signups for beta customers who want early access to its Mountain View store. The goal is to collect enough data to dictate the future product array and business model. Inokyo is deciding whether it wants to sell its technology as a service to other retail stores, run its own stores or work with brands to improve their products’ positioning based on in-store sensor data on customer behavior.

“We knew that building this technology in a lab somewhere wouldn’t yield a successful product,” says Francis. “Our hypothesis here is that whoever ships first, learns in the real world and iterates the fastest on this technology will be the ones to make these stores ubiquitous.” Inokyo might never rise into a retail giant ready to compete with Amazon and Whole Foods. But its tech could even the playing field, equipping smaller businesses with the tools to keep tech giants from having a monopoly on autonomous shopping experiences.

It’s about what cashiers do instead

“Amazon isn’t as ahead as we assumed,” Francis remarks. He and his co-founder Rameez Remsudeen took a trip to Seattle to see the Amazon Go store that first traded cashiers for cameras in the U.S. Still, they realized, “This experience can be magical.” The two met at Carnegie Mellon through machine learning classes before they went on to apply that knowledge at Instagram and Uber. They decided that if they jumped into autonomous retail soon enough, they could still have a say in shaping its direction.

Next week, Inokyo will graduate from Y Combinator’s accelerator that provided its initial seed funding. In six weeks during the program, they found a retail space on Mountain View’s main drag, studied customer behaviors in traditional stores, built an initial product line and developed the technology to track what users are taking off the shelves.

Here’s how the Inokyo store works. You download its app and connect a payment method, and you get a QR code that you wave in front of a little sensor as you stroll into the shop. Overhead cameras will scan your body shape and clothing without facial recognition in order to track you as you move around the store. Meanwhile, on-shelf cameras track when products are picked up or put back. Combined, knowing who’s where and what’s grabbed lets it assign the items to your cart. You scan again on your way out, and later you get a receipt detailing the charges.
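As a rough illustration of the matching problem (and emphatically not Inokyo’s actual code), assigning a shelf event to a shopper can be as simple as a nearest-track lookup:

```python
# Toy illustration of assigning shelf events to tracked shoppers by
# proximity. Purely hypothetical; Inokyo hasn't published its method.
from collections import defaultdict
from math import dist

# (shopper_id -> last known floor position) from the overhead cameras
tracks = {"shopper_1": (2.0, 3.5), "shopper_2": (6.0, 1.0)}

# shelf-camera events: (shelf position, item, +1 pick / -1 put-back)
events = [((2.2, 3.4), "kombucha", +1),
          ((6.1, 0.9), "protein_powder", +1),
          ((2.1, 3.6), "kombucha", -1)]

carts = defaultdict(lambda: defaultdict(int))
for shelf_pos, item, delta in events:
    # Charge whoever is standing closest to the shelf at event time.
    closest = min(tracks, key=lambda sid: dist(tracks[sid], shelf_pos))
    carts[closest][item] += delta

print({sid: dict(items) for sid, items in carts.items()})
# {'shopper_1': {'kombucha': 0}, 'shopper_2': {'protein_powder': 1}}
```

The fragility is obvious: the moment two tracks get confused, the wrong cart gets charged, which is exactly the crowded-store failure mode discussed below.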

Originally, Inokyo actually didn’t make you scan on the way out, but it got the feedback that customers were scared they were actually stealing. The scan-out is more about peace of mind than engineering necessity. There is a subversive pleasure to feeling like, “well, if Inokyo didn’t catch all the stuff I chose, that’s not my problem.” And if you’re overcharged, there’s an in-app support button for getting a refund.

Inokyo co-founders (from left): Tony Francis and Rameez Remsudeen

Inokyo was accurate in what it charged me, despite me doing a few switcharoos with the products I nabbed. But there were only about three people in the room at the time. The real test for these kinds of systems is when a rush of customers floods in and the cameras have to differentiate between multiple similar-looking people. Inokyo will likely need to be more than 99 percent accurate to be more of a help than a headache. An autonomous store that constantly over- or undercharges would be more trouble than it’s worth, and patrons would just go to the nearest classic shop.

Just because autonomous retail stores will be cashier-less doesn’t mean they’ll have no staff. To maximize cost-cutting, they could just trust that people won’t loot them. However, Inokyo plans to have someone minding the shop to make sure people scan in the first place and to answer questions about the process. But there’s also an opportunity in reassigning labor from being cashiers to concierges who can recommend the best products or find what’s the right fit for the customer. These stores will be judged by the convenience of the holistic experience, not just the tech. At the very least, a single employee might be able to handle restocking, customer support and store maintenance once freed from cashier duties.

The Amazon Go autonomous retail store in Seattle is equipped with tons of overhead cameras

While Amazon Go uses cameras in a similar way to Inokyo, it also relies on weight sensors to track items. There are plenty of other companies chasing the cashierless dream. China’s BingoBox has nearly $100 million in funding and has more than 300 stores, though they use less sophisticated RFID tags. Fellow Y Combinator startup Standard Cognition has raised $5 million to equip old-school stores with autonomous camera-tech. AiFi does the same, but touts that its cameras can detect abnormal behavior that might signal someone is a shoplifter.

The store of the future seems like more and more of a sure thing. The race’s winner will be determined by who builds the most accurate tracking software, easy-to-install hardware and pleasant overall shopping flow. If this modular technology can cut costs and lines without alienating customers, we could see our local brick-and-mortars adapt quickly. The bigger question than if or even when this future arrives is what it will mean for the millions of workers who make their living running the checkout lane.

Source: Mobile – Techcrunch

VR optics could help old folks keep the world in focus

The complex optics involved with putting a screen an inch away from the eye in VR headsets could make for smartglasses that correct for vision problems. These prototype “autofocals” from Stanford researchers use depth sensing and gaze tracking to bring the world into focus when someone lacks the ability to do it on their own.
I talked with lead researcher Nitish Padmanaban at SIGGRAPH in Vancouver, where he and the others on his team were showing off the latest version of the system. It’s meant, he explained, to be a better solution to the problem of presbyopia, which is basically when your eyes refuse to focus on close-up objects. It happens to millions of people as they age, even people with otherwise excellent vision.
There are, of course, bifocals and progressive lenses that bend light in such a way as to bring such objects into focus — purely optical solutions, and cheap as well, but inflexible, and they only provide a small “viewport” through which to view the world. There are adjustable-lens glasses as well, but they must be adjusted slowly and manually with a dial on the side. What if you could make the whole lens change shape automatically, depending on the user’s need, in real time?
That’s what Padmanaban and colleagues Robert Konrad and Gordon Wetzstein are working on, and although the current prototype is obviously far too bulky and limited for actual deployment, the concept seems totally sound.
Padmanaban previously worked in VR, and mentioned what’s called the convergence-accommodation problem. Basically, the way that we see changes in real life when we move and refocus our eyes from far to near doesn’t happen properly (if at all) in VR, and that can produce pain and nausea. Having lenses that automatically adjust based on where you’re looking would be useful there — and indeed some VR developers were showing off just that only 10 feet away. But it could also apply to people who are unable to focus on nearby objects in the real world, Padmanaban thought.
This is an old prototype, but you get the idea.
It works like this. A depth sensor on the glasses collects a basic view of the scene in front of the person: a newspaper is 14 inches away, a table three feet away, the rest of the room considerably more. Then an eye-tracking system checks where the user is currently looking and cross-references that with the depth map.
Having been equipped with the specifics of the user’s vision problem, for instance that they have trouble focusing on objects closer than 20 inches away, the apparatus can then make an intelligent decision as to whether and how to adjust the lenses of the glasses.
In the case above, if the user was looking at the table or the rest of the room, the glasses will assume whatever normal correction the person requires to see — perhaps none. But if they change their gaze to focus on the paper, the glasses immediately adjust the lenses (perhaps independently per eye) to bring that object into focus in a way that doesn’t strain the person’s eyes.
The whole process of checking the gaze, depth of the selected object and adjustment of the lenses takes a total of about 150 milliseconds. That’s long enough that the user might notice it happens, but the whole process of redirecting and refocusing one’s gaze takes perhaps three or four times that long — so the changes in the device will be complete by the time the user’s eyes would normally be at rest again.
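In pseudocode terms, the loop the researchers describe looks something like the sketch below. The sensor and lens interfaces are invented stand-ins, and the real system surely does far more filtering and calibration.

```python
# Sketch of the autofocal loop described above: find where the user is
# looking, look up that point's distance, and drive the lens to match.
# All hardware classes are invented stand-ins for illustration.
NEAR_LIMIT_M = 0.5  # e.g. a user who can't focus closer than ~20 in

def added_power_diopters(target_m: float) -> float:
    """Extra lens power needed to focus at target_m, given the user
    can still accommodate down to NEAR_LIMIT_M on their own."""
    if target_m >= NEAR_LIMIT_M:
        return 0.0
    # An object at distance d demands 1/d diopters of vergence; the
    # lens supplies the difference beyond what the eye can manage.
    return 1.0 / target_m - 1.0 / NEAR_LIMIT_M

class FakeGazeTracker:
    def gaze_point(self): return (320, 240)  # pixel being looked at

class FakeDepthSensor:
    def distance_at(self, xy): return 0.35   # meters to that point

class FakeLens:
    def set_diopters(self, d): print(f"lens -> {d:+.2f} D")

def autofocal_step(gaze, depth, lens):
    xy = gaze.gaze_point()            # 1. where is the user looking?
    target = depth.distance_at(xy)    # 2. how far away is that thing?
    lens.set_diopters(added_power_diopters(target))  # 3. refocus
    # Steps 1-3 take ~150 ms end to end on the prototype.

autofocal_step(FakeGazeTracker(), FakeDepthSensor(), FakeLens())
```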
“Even with an early prototype, the Autofocals are comparable to and sometimes better than traditional correction,” reads a short summary of the research published for SIGGRAPH. “Furthermore, the ‘natural’ operation of the Autofocals makes them usable on first wear.”
The team is currently conducting tests to measure more quantitatively the improvements derived from this system, and test for any possible ill effects, glitches or other complaints. They’re a long way from commercialization, but Padmanaban suggested that some manufacturers are already looking into this type of method and despite its early stage, it’s highly promising. We can expect to hear more from them when the full paper is published.

Source: Gadgets – techcrunch

This robot maintains tender, unnerving eye contact

Humans already find it unnerving enough when extremely alien-looking robots are kicked and interfered with, so one can only imagine how much worse it will be when they make unbroken eye contact and mirror your expressions while you heap abuse on them. This is the future we have selected.
The Simulative Emotional Expression Robot, or SEER, was on display at SIGGRAPH here in Vancouver, and it’s definitely an experience. The robot, a creation of Takayuki Todo, is a small humanoid head and neck that responds to the nearest person by making eye contact and imitating their expression.
It doesn’t sound like much, but it’s pretty complex to execute well, which, despite a few glitches, SEER managed to do.
At present it alternates between two modes: imitative and eye contact. Both, of course, rely on a nearby (or, one can imagine, built-in) camera that recognizes and tracks the features of your face in real time.
In imitative mode the positions of the viewer’s eyebrows and eyelids, and the position of their head, are mirrored by SEER. It’s not perfect — it occasionally freaks out or vibrates because of noisy face data — but when it worked it managed rather a good version of what I was giving it. Real humans are more expressive, naturally, but this little face with its creepily realistic eyes plunged deeply into the uncanny valley and nearly climbed the far side.
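Mechanically, imitative mode boils down to mapping tracked face landmarks onto servo targets, with some smoothing to tame that noisy face data. Here is a toy version; the landmark format, joint names and smoothing constant are mine, not Todo’s.

```python
# Toy version of imitative mode: map tracked face landmarks to servo
# angles, with smoothing to damp the noisy-face-data jitter mentioned
# above. Landmark format and joint ranges are invented for the example.
def lerp(a, b, t):            # linear interpolation
    return a + (b - a) * t

class ImitativeHead:
    SMOOTHING = 0.2           # lower = smoother but laggier mirroring

    def __init__(self):
        self.pose = {"brow": 0.0, "eyelid": 0.0, "head_yaw": 0.0}

    def update(self, landmarks: dict) -> dict:
        """landmarks holds normalized values in [-1, 1] per feature."""
        targets = {
            "brow": landmarks["brow_raise"],
            "eyelid": landmarks["eye_openness"],
            "head_yaw": landmarks["head_yaw"],
        }
        # Exponential smoothing keeps one noisy frame from making the
        # robot "freak out or vibrate", as observed in the demo.
        for joint, target in targets.items():
            self.pose[joint] = lerp(self.pose[joint], target, self.SMOOTHING)
        return self.pose      # would be sent to the servos

head = ImitativeHead()
print(head.update({"brow_raise": 1.0, "eye_openness": 0.5, "head_yaw": -0.3}))
```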
Eye contact mode has the robot moving on its own while, as you might guess, making uninterrupted eye contact with whoever is nearest. It’s a bit creepy, but not in the way that some robots are — when you’re looked at by inadequately modeled faces, it just feels like bad VFX. In this case it was more the surprising amount of empathy you suddenly feel for this little machine.

That’s largely due to the delicate, childlike, neutral sculpting of the face and highly realistic eyes. If an Amazon Echo had those eyes, you’d never forget it was listening to everything you say. You might even tell it your problems.
This is just an art project for now, but the tech behind it is definitely the kind of thing you can expect to be integrated with virtual assistants and the like in the near future. Whether that’s a good thing or a bad one I guess we’ll find out together.

Source: Gadgets – techcrunch

StarVR’s One headset flaunts eye-tracking and a double-wide field of view

While the field of VR headsets used to be more or less limited to Oculus and Vive, numerous competitors have sprung up as the technology has matured — and some are out to beat the market leaders at their own game. StarVR’s latest headset brings eye-tracking and a seriously expanded field of view to the game, and the latter especially is a treat to experience.
The company announced the new hardware at SIGGRAPH in Vancouver, where I got to go hands-on and eyes-in with the headset. Before you get too excited, though, keep in mind this set is meant for commercial applications — car showrooms, aircraft simulators and so on. What that means is it’s going to be expensive and not as polished a user experience as consumer-focused sets.
That said, the improvements present in the StarVR One are significant and immediately obvious. Most important is probably the expanded FOV — 210 degrees horizontal and 130 vertical. That’s nearly twice as wide as the 110-degree horizontal FOV of the most popular headsets, and believe me, it makes a difference. (I haven’t tried the Pimax 8K, which has a similarly wide FOV.)
On Vive and Oculus sets I always had the feeling that I was looking through a hole into the VR world — a large hole, to be sure, but having your peripheral vision be essentially blank made it a bit claustrophobic.
In the StarVR headset, I felt like the virtual environment was actually around me, not just in front of me. I moved my eyes around much more rather than turning my head, with no worries about accidentally gazing at the fuzzy edge of the display. A 90 Hz refresh rate meant things were nice and smooth.
To throw shade at competitors, the demo I played (I was a giant cyber-ape defending a tower) could switch between the full FOV and a simulation of the 110-degree one found in other headsets. I suspect it was slightly exaggerated, but the difference really is clear.
It’s reasonably light and comfortable — no VR headset is really either. But it doesn’t feel as chunky as it looks.
The resolution of the custom AMOLED display is supposedly 5K, but the company declined to specify the actual resolution when I asked. They did, however, proudly proclaim full RGB pixels and 16 million sub-pixels.
Let’s do the math: 16 million divided by 3 makes around 5.3 million full pixels. 5K isn’t a real standard, just shorthand for having around 5,000 horizontal pixels between the two displays. Divide 5.3 million by that and you get 1060. Rounding those off to semi-known numbers gives us 2560 pixels (per eye) for the horizontal and 1080 for the vertical resolution.
That doesn’t fit the approximately 16:10 ratio of the field of view, but who knows? Let’s not get too bogged down in unknowns. Resolution isn’t everything — but generally, the more pixels the better.
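For those following along at home, the back-of-the-envelope math works out like this:

```python
# Back-of-the-envelope version of the resolution estimate above.
subpixels = 16_000_000
full_pixels = subpixels / 3           # full RGB -> ~5.33M pixels
horizontal_total = 5_000              # "5K" across both eyes' displays
vertical = full_pixels / horizontal_total
print(round(full_pixels / 1e6, 1), round(vertical))  # 5.3  1067
# ~1,067 rows (the piece rounds via 5.3M / 5,000 = 1,060). Halving the
# horizontal gives ~2,500 columns per eye; rounding both axes to
# familiar panel numbers lands on roughly 2560 x 1080 per eye.
```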
The other major new inclusion is an eye-tracking system provided by Tobii. We knew eye-tracking in VR was coming; it was demonstrated at CES, and the Fove Kickstarter showed it was at least conceivable to integrate into a headset now-ish.
Unfortunately, the demos of eye-tracking were pretty limited (think a heat map of where you looked on a car) so, being hungry, I skipped them. The promise is good enough for now — eye tracking allows for all kinds of things, including a “foveated rendering” that focuses display power where you’re looking. This too was not being shown, however, and it strikes me that it is likely phenomenally difficult to pull off well — so it may be a while before we see a good demo of it.
One small but welcome improvement that eye-tracking also enables is automatic detection of interpupillary distance, or IPD — it’s different for everyone and can be important to rendering the image correctly. One less thing to worry about.
The StarVR One is compatible with SteamVR tracking, or you can get the XT version and build your own optical tracking rig — that’s for the commercial providers for whom it’s an option.
Although this headset will be going to high-end commercial types, you can bet that the wide FOV and eye tracking in it will be standard in the next generation of consumer devices. Having tried most of the other headsets, I can say with certainty that I wouldn’t want to go back to some of them after having experienced this one. VR is still a long way off from convincing me it’s worthwhile, but major improvements like these definitely help.

Source: Gadgets – techcrunch