The Xbox Adaptive Controller goes on sale today and is also now part of the V&A museum’s collection

In an important move for inclusion in the gaming community, the Xbox Adaptive Controller, created for gamers with mobility issues, is now on sale. The Victoria and Albert Museum (V&A) also announced today that it has acquired the Xbox Adaptive Controller for display in its Rapid Response gallery dedicated to current events and pop culture.
First introduced in May, the Xbox Adaptive Controller can now be purchased online for $99.99. To create the controller, Microsoft collaborated with gamers with disabilities and limited mobility, as well as partners from several organizations, including the AbleGamers Charity, the Cerebral Palsy Foundation, Special Effect and Warfighter Engaged.
According to Microsoft, the Xbox Adaptive Controller project first took root in 2014 when one of its engineers spotted a custom gaming controller made by Warfighter Engaged, a non-profit that provides gaming devices for wounded and disabled veterans. During several of Microsoft’s hackathons, teams of employees began working on gaming devices for people with limited mobility, which in turn gave momentum to the development of the Xbox Adaptive Controller.
In its announcement, the V&A said it added the Xbox Adaptive Controller to its collection because “as the first adaptive controller designed and manufactured at large scale by a leading technology company, it represents a landmark moment in videogame play, and demonstrates how design can be harnessed to encourage inclusivity and access.”
The Xbox Adaptive Controller features two large buttons that can be programmed to fit its user’s needs, as well as 19 jacks and two USB ports spread out in a single line on the back of the device to make them easier to access. Symbols embossed along the back of the controller’s top help identify ports so gamers don’t have to turn it around or lift it up to find the one they need, while grooves serve as guidelines to help them plug in devices. Based on gamer feedback, Microsoft moved controls including the D-pad to the side of the device and put the A and B buttons closer together, so users can easily move between them with one hand.
The controller slopes down toward the front so gamers can slide their hands onto it without having to lift them (which also makes it easier to control with the feet), and it has rounded edges to reduce the chance of injury if it’s dropped on a foot. The Xbox Adaptive Controller was designed to rest comfortably in laps and also has three threaded inserts so it can be mounted with hardware to wheelchairs, lap boards or desks.
In terms of visual design, the Xbox Adaptive Controller is sleek and unobtrusive, since Microsoft heard from many gamers with limited mobility that they dislike using adaptive devices because they often look like toys. The company’s attention to detail also extends into the controller’s packaging, which is very easy to unbox because gamers told Microsoft that they are often forced to open boxes and other product packages with their teeth.

Source: Gadgets – techcrunch

VR optics could help old folks keep the world in focus

The complex optics involved with putting a screen an inch away from the eye in VR headsets could make for smartglasses that correct for vision problems. These prototype “autofocals” from Stanford researchers use depth sensing and gaze tracking to bring the world into focus when someone lacks the ability to do it on their own.
I talked with lead researcher Nitish Padmanaban at SIGGRAPH in Vancouver, where he and the others on his team were showing off the latest version of the system. It’s meant, he explained, to be a better solution to the problem of presbyopia, which is basically when your eyes refuse to focus on close-up objects. It happens to millions of people as they age, even people with otherwise excellent vision.
There are, of course, bifocals and progressive lenses that bend light in such a way as to bring such objects into focus — purely optical solutions, and cheap as well, but inflexible, and they only provide a small “viewport” through which to view the world. There are adjustable-lens glasses as well, but they must be adjusted slowly and manually with a dial on the side. What if you could make the whole lens change shape automatically, depending on the user’s need, in real time?
That’s what Padmanaban and colleagues Robert Konrad and Gordon Wetzstein are working on, and although the current prototype is obviously far too bulky and limited for actual deployment, the concept seems totally sound.
Padmanaban previously worked in VR, and mentioned what’s called the convergence-accommodation problem. Basically, the way that we see changes in real life when we move and refocus our eyes from far to near doesn’t happen properly (if at all) in VR, and that can produce pain and nausea. Having lenses that automatically adjust based on where you’re looking would be useful there — and indeed some VR developers were showing off just that only 10 feet away. But it could also apply to people who are unable to focus on nearby objects in the real world, Padmanaban thought.
This is an old prototype, but you get the idea.
It works like this. A depth sensor on the glasses collects a basic view of the scene in front of the person: a newspaper is 14 inches away, a table three feet away, the rest of the room considerably more. Then an eye-tracking system checks where the user is currently looking and cross-references that with the depth map.
Having been equipped with the specifics of the user’s vision problem, for instance that they have trouble focusing on objects closer than 20 inches away, the apparatus can then make an intelligent decision as to whether and how to adjust the lenses of the glasses.
In the case above, if the user is looking at the table or the rest of the room, the glasses will assume whatever normal correction the person requires to see — perhaps none. But if they shift their gaze to focus on the paper, the glasses immediately adjust the lenses (perhaps independently per eye) to bring that object into focus in a way that doesn’t strain the person’s eyes.
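To make that decision loop concrete, here is a minimal Python sketch of the logic described above. It assumes, purely as an example, a user who cannot focus closer than 20 inches; the depth sensor, gaze tracker and lens objects are hypothetical stand-ins, not the Stanford team's actual software:

```python
# Minimal sketch of an autofocal update loop (illustrative only).
# The sensor and lens interfaces below are hypothetical stand-ins,
# not the researchers' actual hardware APIs.

NEAR_LIMIT_INCHES = 20.0  # example: this user cannot focus closer than 20 inches

def added_power_diopters(distance_inches: float) -> float:
    """Extra lens power needed to bring an object at this distance
    back inside the user's comfortable focus range."""
    meters = distance_inches * 0.0254
    near_limit_m = NEAR_LIMIT_INCHES * 0.0254
    if meters >= near_limit_m:
        return 0.0  # already within the focus range, no extra correction
    # Difference in vergence demand (1/distance, in diopters) between
    # the object and the user's near limit.
    return 1.0 / meters - 1.0 / near_limit_m

def update(depth_sensor, gaze_tracker, left_lens, right_lens):
    depth_map = depth_sensor.read()          # coarse depth view of the scene
    gaze = gaze_tracker.read()               # where the user is currently looking
    distance = depth_map.distance_at(gaze)   # cross-reference gaze with depth
    power = added_power_diopters(distance)
    left_lens.set_power(power)               # could differ per eye in practice
    right_lens.set_power(power)

# A newspaper 14 inches away needs roughly 0.8 extra diopters for this example user.
print(round(added_power_diopters(14.0), 2))
```

The real device, of course, has to run this continuously and inside the roughly 150-millisecond budget described next.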
The whole process of checking the gaze, reading the depth of the selected object and adjusting the lenses takes a total of about 150 milliseconds. That’s long enough that the user might notice it happening, but the whole process of redirecting and refocusing one’s gaze takes perhaps three or four times that long — so the changes in the device will be complete by the time the user’s eyes would normally be at rest again.
“Even with an early prototype, the Autofocals are comparable to and sometimes better than traditional correction,” reads a short summary of the research published for SIGGRAPH. “Furthermore, the ‘natural’ operation of the Autofocals makes them usable on first wear.”
The team is currently conducting tests to measure more quantitatively the improvements derived from this system, and to check for any possible ill effects, glitches or other complaints. They’re a long way from commercialization, but Padmanaban suggested that some manufacturers are already looking into this type of method and that, despite its early stage, it’s highly promising. We can expect to hear more from them when the full paper is published.

Source: Gadgets – techcrunch

This smart prosthetic ankle adjusts to rough terrain

Prosthetic limbs are getting better and more personalized, but useful as they are, they’re still a far cry from the real thing. This new prosthetic ankle is a little closer than others, though: it moves on its own, adapting to its user’s gait and the surface on which it lands.
Your ankle does a lot of work when you walk: lifting your toe out of the way so you don’t scuff it on the ground, controlling the tilt of your foot to minimize the shock when it lands or as you adjust your weight, all while conforming to bumps and other irregularities it encounters. Few prostheses attempt to replicate these motions, meaning all that work is done in a more basic way, like the bending of a spring or compression of padding.
But this prototype ankle from Michael Goldfarb, a mechanical engineering professor at Vanderbilt, goes much further than passive shock absorption. Inside the joint are a motor and actuator, controlled by a chip that senses and classifies motion and determines how each step should look.

“This device first and foremost adapts to what’s around it,” Goldfarb said in a video documenting the prosthesis.
“You can walk up slopes, down slopes, up stairs and down stairs, and the device figures out what you’re doing and functions the way it should,” he added in a news release from the university.
When it senses that the foot has lifted up for a step, it can lift the toe up to keep it clear, also exposing the heel so that when the limb comes down, it can roll into the next step. And by reading the pressure both from above (indicating how the person is using that foot) and below (indicating the slope and irregularities of the surface) it can make that step feel much more like a natural one.
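As a rough illustration of that sense, classify, actuate cycle, here is a small Python sketch. The sensor names, thresholds and target angles are invented for the example; the article does not describe Goldfarb's actual control firmware:

```python
# Hypothetical sketch of the sense -> classify -> actuate loop described above.
# Threshold values and sensor names are illustrative, not from the actual device.

from dataclasses import dataclass

@dataclass
class Sensors:
    load_above: float    # pressure from the user's weight on the joint (N)
    load_below: float    # ground reaction force under the foot (N)
    ground_slope: float  # estimated surface slope under the foot (degrees)

def classify_phase(s: Sensors) -> str:
    """Very coarse gait-phase classifier: is the foot in the air or on the ground?"""
    if s.load_below < 5.0:           # almost no ground contact -> swing phase
        return "swing"
    if s.load_above > s.load_below:  # weight transferring onto the foot
        return "loading"
    return "stance"

def target_ankle_angle(s: Sensors) -> float:
    """Pick a target dorsiflexion angle (degrees) for the motor to drive toward."""
    phase = classify_phase(s)
    if phase == "swing":
        return 10.0                    # lift the toe to clear the ground, expose the heel
    if phase == "loading":
        return -5.0 + s.ground_slope   # roll into the step, conforming to the slope
    return s.ground_slope              # stance: match the surface under the foot

# Example: foot in the air mid-step -> toe lifts to 10 degrees
print(target_ankle_angle(Sensors(load_above=0.0, load_below=0.0, ground_slope=0.0)))
```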

One veteran of many prostheses, Mike Sasser, tested the device and had good things to say: “I’ve tried hydraulic ankles that had no sort of microprocessors, and they’ve been clunky, heavy and unforgiving for an active person. This isn’t that.”
Right now the device is still very lab-bound, and it runs on wired power — not exactly convenient if someone wants to go for a walk. But if the joint works as designed, as it certainly seems to, then powering it is a secondary issue. The plan is to commercialize the prosthesis in the next couple of years once all that is figured out. You can learn a bit more about Goldfarb’s research at the Center for Intelligent Mechatronics.

Source: Gadgets – techcrunch

HoloLens acts as eyes for blind users and guides them with audio prompts

Microsoft’s HoloLens has an impressive ability to quickly sense its surroundings, but limiting it to displaying emails or game characters on them would show a lack of creativity. New research shows that it works quite well as a visual prosthesis for the vision impaired, not relaying actual visual data but guiding them in real time with audio cues and instructions.
The researchers, from Caltech and the University of Southern California, first argue that restoring vision is at present simply not a realistic goal, but that replacing the perception portion of vision isn’t necessary to replicate the practical portion. After all, if you can tell where a chair is, you don’t need to see it to avoid it, right?
Crunching visual data and producing a map of high-level features like walls, obstacles and doors is one of the core capabilities of the HoloLens, so the team decided to let it do its thing and recreate the environment for the user from these extracted features.
They designed the system around sound, naturally. Every major object and feature can tell the user where it is, either via voice or sound. Walls, for instance, hiss (presumably a white noise, not a snake hiss) as the user approaches them. The user can also scan the scene, with objects announcing themselves from left to right, each from the direction in which it is located. A single object can be selected and will repeat its callout to help the user find it.
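A minimal Python sketch of how that left-to-right scan could be sequenced is below. The SceneObject fields and the speak_from() helper are hypothetical placeholders standing in for the HoloLens scene data and spatialized speech, not the researchers' code:

```python
# Hypothetical sketch of the left-to-right scene scan described above.
# Each detected object is announced from the direction it occupies;
# speak_from() stands in for a real spatial-audio / text-to-speech call.

from dataclasses import dataclass

@dataclass
class SceneObject:
    label: str          # e.g. "chair", "door", "wall"
    azimuth_deg: float  # direction relative to the user: negative = left, positive = right
    distance_m: float

def speak_from(text: str, azimuth_deg: float, distance_m: float) -> None:
    # Placeholder: a real implementation would render spatialized speech on the headset.
    print(f"[{azimuth_deg:+.0f} deg, {distance_m:.1f} m] {text}")

def scan_scene(objects: list) -> None:
    """Announce every object from left to right, each from its own direction."""
    for obj in sorted(objects, key=lambda o: o.azimuth_deg):
        speak_from(obj.label, obj.azimuth_deg, obj.distance_m)

scan_scene([
    SceneObject("door", 40, 3.0),
    SceneObject("chair", -15, 1.2),
    SceneObject("wall", 0, 2.5),
])
```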
That’s all well for stationary tasks like finding your cane or the couch in a friend’s house. But the system also works in motion.
The team recruited seven blind people to test it out. They were given a brief intro but no training, and then asked to accomplish a variety of tasks. The users could reliably locate and point to objects from audio cues, and were able to find a chair in a room in a fraction of the time they normally would, and avoid obstacles easily as well.
This render shows the actual paths taken by the users in the navigation tests
Then they were tasked with navigating from the entrance of a building to a room on the second floor by following the headset’s instructions. A “virtual guide” repeatedly says “follow me” from an apparent distance of a few feet ahead, while also warning when stairs are coming up, where handrails are and when the user has gone off course.
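A very coarse Python sketch of that kind of cueing logic might look like the following; the waypoint format, thresholds and announce() helper are all invented for illustration and are not the researchers' implementation:

```python
# Very coarse sketch of the "virtual guide" cueing described above (illustrative only).
import math

def announce(text: str) -> None:
    print(text)  # stand-in for speech rendered a few feet ahead of the user

def guide_cue(user_xy, waypoint, off_course_m=1.5):
    """Pick the next audio cue for one step of guided navigation.
    waypoint is (x, y, kind), where kind might be "corridor" or "stairs"."""
    (ux, uy), (wx, wy, kind) = user_xy, waypoint
    if math.hypot(wx - ux, wy - uy) > off_course_m:
        announce("You have gone off course. Come back toward my voice.")
    elif kind == "stairs":
        announce("Stairs coming up. Handrail on your right.")
    else:
        announce("Follow me.")

guide_cue((0.0, 0.0), (0.5, 1.0, "corridor"))  # -> "Follow me."
guide_cue((0.0, 0.0), (0.5, 1.0, "stairs"))    # -> stairs warning
```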
All seven users got to their destinations on the first try, and much more quickly than if they had had to proceed normally with no navigation. One subject, the paper notes, said “That was fun! When can I get one?”
Microsoft actually looked into something like this years ago, but the hardware just wasn’t there — HoloLens changes that. Even though it is clearly intended for use by sighted people, its capabilities naturally fill the requirements for a visual prosthesis like the one described here.
Interestingly, the researchers point out that this type of system was also predicted more than 30 years ago, long before it was even close to possible:
“I strongly believe that we should take a more sophisticated approach, utilizing the power of artificial intelligence for processing large amounts of detailed visual information in order to substitute for the missing functions of the eye and much of the visual pre-processing performed by the brain,” wrote the clearly far-sighted C.C. Collins way back in 1985.
The potential for a system like this is huge, but this is just a prototype. As systems like HoloLens get lighter and more powerful, they’ll go from lab-bound oddities to everyday items — one can imagine the front desk at a hotel or mall stocking a few to give to vision-impaired folks who need to find their room or a certain store.
“By this point we expect that the reader already has proposals in mind for enhancing the cognitive prosthesis,” they write. “A hardware/software platform is now available to rapidly implement those ideas and test them with human subjects. We hope that this will inspire developments to enhance perception for both blind and sighted people, using augmented auditory reality to communicate things that we cannot see.”

Source: Gadgets – techcrunch

Microsoft’s Xbox Adaptive Controller is an inspiring example of inclusive design

Every gamer with a disability faces a unique challenge for many reasons, one of which is the relative dearth of accessibility-focused peripherals for consoles. Microsoft is taking a big step toward fixing this with its Xbox Adaptive Controller, a device created to address the needs of gamers for whom ordinary gamepads aren’t an option.
The XAC, revealed officially at a recent event but also leaked a few days ago, is essentially a pair of gigantic programmable buttons and an oversized directional pad; 3.5mm ports on the back let a huge variety of assistive devices like blow tubes, pedals and Microsoft-made accessories plug in.
It’s not meant to be an all-in-one solution by any means, more like a hub that allows gamers with disabilities to easily make and adjust their own setups with a minimum of hassle. Whatever you’re capable of, whatever’s comfortable, whatever gear you already have, the XAC is meant to enable it.
I’d go into detail, but it would be impossible to do better than Microsoft’s extremely interesting and in-depth post introducing the XAC, which goes into the origins of the hardware, the personal stories of the testers and creators and much more. Absolutely worth taking the time to read.
I look forward to hearing more about the system and how its users put it to use, and I’m glad to see inclusivity and accessibility being pursued in such a practical and carefully researched manner.

Source: Gadgets – techcrunch

Get the latest TC stories read to you over the phone with BrailleVoice

For the visually impaired, there are lots of accessibility options if you want to browse the web — screen readers, podcast versions of articles and so on. But it can still be a pain to keep up with your favorite publications the way sighted app users do. BrailleVoice is a project that puts the news in a touch-tone phone interface, reading you the latest news from your favorite publications (like this one) easily from anywhere you get a signal.

It’s from SpaceNext, AKA Shan, who has developed a variety of useful little apps over the years, collected on his page — John wrote up one back in 2011. Several of them have an accessibility aspect to them, something that always piques my interest.

“Visually challenged users will find it difficult to navigate using apps,” he wrote in an email. “I thought with text to speech readily available… they would be able to make a call to a toll free number to listen to latest news from any site.”

All you do is dial 1-888-666-4013, then listen to the options on the menu. TechCrunch is the first outlet listed, so hit 1# and it’ll read out the headlines. Select one (of mine) and it’ll jump right in. That’s it! There are a couple of dozen sites listed right now, from LifeHacker (hit 15#) to the Times of India (hit 26#). You can also suggest new sites to add, presumably as long as they have some kind of RSS feed. (This should be a reminder of why you should keep your website or news service accessible in some similar manner.)
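If you are curious how such a menu could be wired up, here is a minimal Python sketch that maps a touch-tone selection to RSS headlines. It uses the feedparser library, and the feed list and say() helper are placeholders for illustration; the post doesn't describe how the actual service is built:

```python
# Hypothetical sketch of a touch-tone menu that reads RSS headlines aloud.
# The feed list and say() helper are illustrative; the real BrailleVoice
# backend is not described in the post.

import feedparser  # pip install feedparser

FEEDS = {
    "1": ("TechCrunch", "https://techcrunch.com/feed/"),
    # ...more outlets keyed by the digits callers press before '#'
}

def say(text: str) -> None:
    # Placeholder: the real service would speak this over the phone line.
    print(text)

def handle_selection(digits: str) -> None:
    """Respond to a caller pressing, for example, '1' then '#'."""
    name, url = FEEDS.get(digits, (None, None))
    if name is None:
        say("Sorry, that selection is not available.")
        return
    say(f"Latest headlines from {name}.")
    for entry in feedparser.parse(url).entries[:5]:
        say(entry.title)

handle_selection("1")
```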

“More importantly,” he continued, “this works even without internet even in the remotest of places. You can listen to your favorite news site without having to spend a dime or worry about internet.”

Assuming you can get a voice signal and you’ve got minutes, anyway. I quite like the idea of someone walking into the nearest town, pulling out their old Nokia, dialing this up and keeping up to date with the most news-addicted of us.

The text-to-speech engine is pretty rudimentary, but it’s better than what we all had a few years back, and it’ll only get better as improved engines like Google’s and Apple’s trickle down for general-purpose use. I’m going to ask them about that, actually.

It’s quite a basic service, but what more does it need, really? Shan is planning to integrate voice controls into the likes of Google Home and Alexa, so there’s that. But as is, it may be enough to provide plenty of utility to the vision-impaired. Check out TextOnly too. I could use that for desktop.

Source: Mobile – Techcrunch