‘Jackrabbot 2’ takes to the sidewalks to learn how humans navigate politely

Autonomous vehicles and robots have to know how to get from A to B without hitting obstacles or pedestrians — but how can they do so politely and without disturbing nearby humans? That’s what Stanford’s Jackrabbot project aims to learn, and now a redesigned robot will be cruising campus learning the subtleties of humans negotiating one another’s personal space.
“There are many behaviors that we humans subconsciously follow – when I’m walking through crowds, I maintain personal distance or, if I’m talking with you, someone wouldn’t go between us and interrupt,” said grad student Ashwini Pokle in a Stanford News release. “We’re working on these deep learning algorithms so that the robot can adapt these behaviors and be more polite to people.”
Of course there are practical applications pertaining to last mile problems and robotic delivery as well. What do you do if someone stops in front of you? What if there’s a group running up behind? Experience is the best teacher, as usual.
The first robot was put to work in 2016 and has since been building a model of how humans (well, mostly undergrads) walk around safely, avoiding one another while taking efficient paths, and signal what they’re doing the whole time. But technology has advanced so quickly that a new iteration was called for.
The JackRabbot project team with JackRabbot 2 (from left to right): Patrick Goebel, Noriaki Hirose, Tin Tin Wisniewski, Amir Sadeghian, Alan Federman, Silvio Savarese, Roberto Martín-Martín, Pin Pin Tea-mangkornpan and Ashwini Pokle
The new robot has a vastly improved sensor suite compared to its predecessor: two Velodyne lidar units giving 360-degree coverage, plus a set of stereo cameras making up its neck that give it another depth-sensing 360-degree view. The cameras and sensors on its head can also be pointed wherever needed, of course, just like ours. All this imagery is collated by a pair of new GPUs in its base/body.
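To give a rough sense of what collating all that imagery involves, here’s a minimal sketch of merging several depth sensors into one 360-degree point cloud around the robot. The sensor names, poses and random points are placeholders, not Jackrabbot 2’s actual calibration or code.

```python
import numpy as np

# Hypothetical extrinsics: each sensor's pose in the robot's base frame.
# (The real Jackrabbot 2 calibration isn't public; these are placeholders.)
SENSOR_POSES = {
    "velodyne_top": {"R": np.eye(3), "t": np.array([0.0, 0.0, 1.4])},
    "velodyne_mid": {"R": np.eye(3), "t": np.array([0.0, 0.0, 1.0])},
    "stereo_neck":  {"R": np.eye(3), "t": np.array([0.05, 0.0, 1.2])},
}

def to_robot_frame(points, pose):
    """Transform an (N, 3) point cloud from a sensor frame to the robot frame."""
    return points @ pose["R"].T + pose["t"]

def fuse_point_clouds(clouds_by_sensor):
    """Stack per-sensor clouds into one 360-degree cloud around the robot."""
    fused = [to_robot_frame(pts, SENSOR_POSES[name])
             for name, pts in clouds_by_sensor.items()]
    return np.vstack(fused)

# Toy usage with random points standing in for real lidar/stereo returns.
clouds = {name: np.random.rand(100, 3) for name in SENSOR_POSES}
merged = fuse_point_clouds(clouds)
print(merged.shape)  # (300, 3)
```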
Amir Sadeghian, one of the researchers, said this makes Jackrabbot 2 “one of the most powerful robots of its size that has ever been built.”

This will allow the robot to sense human motion with a much greater degree of precision than before, and also operate more safely. It will also give the researchers a chance to see how the movement models created by the previous robot integrate with this new imagery.
The other major addition is a totally normal-looking arm that Jackrabbot 2 can use to gesture to others. After all, we do it, right? When it’s unclear who should enter a door first or what side of a path they should take, a wave of the hand is all it takes to clear things up. Usually. Hopefully this kinked little gripper accomplishes the same thing.
Jackrabbot 2 can zoom around for several hours at a time, Sadeghian said. “At this stage of the project for safety we have a human with a safety switch accompanying the robot, but the robot is able to navigate in a fully autonomous way.”
Having working knowledge of how people use the space around them and how to predict their movements will be useful to startups like Kiwi, Starship, and Marble. The first time a delivery robot smacks into someone’s legs is the last time they consider ordering something via one.

Source: Gadgets – techcrunch

Autonomous retail startup Inokyo’s first store feels like stealing

Inokyo wants to be the indie Amazon Go. It’s just launched its prototype cashierless autonomous retail store. Cameras track what you grab from shelves, and with a single QR scan of its app on your way in and out of the store, you’re charged for what you got.

Inokyo‘s first store is now open on Mountain View’s Castro Street selling an array of bougie kombuchas, snacks, protein powders and bath products. It’s sparse and a bit confusing, but offers a glimpse of what might be a commonplace shopping experience five years from now. You can get a glimpse yourself in our demo video below:

“Cashierless stores will have the same level of impact on retail as self-driving cars will have on transportation,” Inokyo co-founder Tony Francis tells me. “This is the future of retail. It’s inevitable that stores will become increasingly autonomous.”

Inokyo (rhymes with Tokyo) is now accepting signups for beta customers who want early access to its Mountain View store. The goal is to collect enough data to dictate the future product array and business model. Inokyo is deciding whether it wants to sell its technology as a service to other retail stores, run its own stores or work with brands to improve their products’ positioning based on in-store sensor data on customer behavior.

“We knew that building this technology in a lab somewhere wouldn’t yield a successful product,” says Francis. “Our hypothesis here is that whoever ships first, learns in the real world and iterates the fastest on this technology will be the ones to make these stores ubiquitous.” Inokyo might never rise into a retail giant ready to compete with Amazon and Whole Foods. But its tech could even the playing field, equipping smaller businesses with the tools to keep tech giants from having a monopoly on autonomous shopping experiences.

It’s about what cashiers do instead

“Amazon isn’t as ahead as we assumed,” Francis remarks. He and his co-founder Rameez Remsudeen took a trip to Seattle to see the Amazon Go store that first traded cashiers for cameras in the U.S. Still, they realized, “This experience can be magical.” The two met at Carnegie Mellon through machine learning classes before they went on to apply that knowledge at Instagram and Uber. They decided that if they jumped into autonomous retail soon enough, they could still have a say in shaping its direction.

Next week, Inokyo will graduate from Y Combinator’s accelerator that provided its initial seed funding. In six weeks during the program, they found a retail space on Mountain View’s main drag, studied customer behaviors in traditional stores, built an initial product line and developed the technology to track what users are taking off the shelves.

Here’s how the Inokyo store works. You download its app and connect a payment method, and you get a QR code that you wave in front of a little sensor as you stroll into the shop. Overhead cameras will scan your body shape and clothing without facial recognition in order to track you as you move around the store. Meanwhile, on-shelf cameras track when products are picked up or put back. Combined, knowing who’s where and what’s grabbed lets it assign the items to your cart. You scan again on your way out, and later you get a receipt detailing the charges.
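Inokyo hasn’t published how that matching works, but as a rough sketch of the idea, here’s how shelf pick events might be assigned to whichever tracked shopper was standing at that shelf around the same time. All the names, data structures and thresholds below are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class PickEvent:
    shelf_id: str
    item: str
    timestamp: float
    put_back: bool = False   # True if the item was returned to the shelf

@dataclass
class ShopperTrack:
    shopper_id: str
    # (timestamp, shelf_id the shopper is standing closest to)
    positions: list = field(default_factory=list)
    cart: list = field(default_factory=list)

def nearest_shopper(event, shoppers, max_dt=2.0):
    """Pick the shopper who was closest to the shelf around the event time."""
    best, best_dt = None, max_dt
    for s in shoppers:
        for t, shelf in s.positions:
            dt = abs(t - event.timestamp)
            if shelf == event.shelf_id and dt < best_dt:
                best, best_dt = s, dt
    return best

def assign_events(events, shoppers):
    for e in events:
        s = nearest_shopper(e, shoppers)
        if s is None:
            continue  # ambiguous event: a real system would flag this for review
        if e.put_back and e.item in s.cart:
            s.cart.remove(e.item)
        elif not e.put_back:
            s.cart.append(e.item)

# Toy usage: one shopper lingers at shelf_3 and grabs a kombucha.
alice = ShopperTrack("alice", positions=[(10.0, "shelf_3"), (12.0, "shelf_3")])
assign_events([PickEvent("shelf_3", "kombucha", 11.5)], [alice])
print(alice.cart)  # ['kombucha']
```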

Originally, Inokyo didn’t make you scan on the way out, but it got feedback that customers were scared they were stealing. The scan-out is more about peace of mind than engineering necessity. There is a subversive pleasure to feeling like, “well, if Inokyo didn’t catch all the stuff I chose, that’s not my problem.” And if you’re overcharged, there’s an in-app support button for getting a refund.

Inokyo co-founders (from left): Tony Francis and Rameez Remsudeen

Inokyo was accurate in what it charged me despite me doing a few switcharoos with products I nabbed. But there were only about three people in the room at the time. The real test for these kinds of systems is when a rush of customers floods in and cameras have to differentiate between multiple similar-looking people. Inokyo will likely need to be more than 99 percent accurate to be more of a help than a headache. An autonomous store that constantly over- or undercharges would be more trouble than it’s worth, and patrons would just go to the nearest classic shop.

Just because autonomous retail stores will be cashier-less doesn’t mean they’ll have no staff. To maximize cost-cutting, they could just trust that people won’t loot it. However, Inokyo plans to have someone minding the shop to make sure people scan in the first place and to answer questions about the process. But there’s also an opportunity in reassigning labor from being cashiers to concierges that can recommend the best products or find what’s the right fit for the customer. These stores will be judged by the convenience of the holistic experience, not just the tech. At the very least, a single employee might be able to handle restocking, customer support and store maintenance once freed from cashier duties.

The Amazon Go autonomous retail store in Seattle is equipped with tons of overhead cameras

While Amazon Go uses cameras in a similar way to Inokyo, it also relies on weight sensors to track items. There are plenty of other companies chasing the cashierless dream. China’s BingoBox has nearly $100 million in funding and has more than 300 stores, though they use less sophisticated RFID tags. Fellow Y Combinator startup Standard Cognition has raised $5 million to equip old-school stores with autonomous camera-tech. AiFi does the same, but touts that its cameras can detect abnormal behavior that might signal someone is a shoplifter.

The store of the future seems like more and more of a sure thing. The race’s winner will be determined by who builds the most accurate tracking software, easy-to-install hardware and pleasant overall shopping flow. If this modular technology can cut costs and lines without alienating customers, we could see our local brick-and-mortars adapt quickly. The bigger question than if or even when this future arrives is what it will mean for the millions of workers who make their living running the checkout lane.

Source: Mobile – TechCrunch

This robot maintains tender, unnerving eye contact

Humans already find it unnerving enough when extremely alien-looking robots are kicked and interfered with, so one can only imagine how much worse it will be when they make unbroken eye contact and mirror your expressions while you heap abuse on them. This is the future we have selected.
The Simulative Emotional Expression Robot, or SEER, was on display at SIGGRAPH here in Vancouver, and it’s definitely an experience. The robot, a creation of Takayuki Todo, is a small humanoid head and neck that responds to the nearest person by making eye contact and imitating their expression.
It doesn’t sound like much, but it’s pretty complex to execute well, which, despite a few glitches, SEER managed to do.
At present it alternates between two modes: imitative and eye contact. Both, of course, rely on a nearby (or, one can imagine, built-in) camera that recognizes and tracks the features of your face in real time.
In imitative mode the positions of the viewer’s eyebrows and eyelids, and the position of their head, are mirrored by SEER. It’s not perfect — it occasionally freaks out or vibrates because of noisy face data — but when it worked it managed rather a good version of what I was giving it. Real humans are more expressive, naturally, but this little face with its creepily realistic eyes plunged deeply into the uncanny valley and nearly climbed the far side.
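Here’s a minimal sketch of what that mirroring loop might look like, assuming a face tracker that outputs normalized features (eyebrow raise, eyelid openness, head pose) and servos with made-up ranges. It isn’t Todo’s actual control code, just the general shape of it, including the smoothing you’d want to tame noisy face data.

```python
# Hypothetical servo ranges in degrees; the real SEER hardware isn't documented here.
SERVO_RANGE = {
    "brow": (-15, 15),
    "eyelid": (0, 40),
    "head_yaw": (-45, 45),
    "head_pitch": (-30, 30),
}

def to_angle(value, lo, hi):
    """Map a tracker feature in [0, 1] onto a servo's angular range."""
    value = max(0.0, min(1.0, value))
    return lo + value * (hi - lo)

class MirrorController:
    """Turn face-tracker features into servo targets, smoothing noisy frames."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha                        # low-pass filter weight
        self.state = {name: 0.0 for name in SERVO_RANGE}

    def update(self, features):
        """features: dict of normalized tracker outputs, e.g. eyebrow raise in [0, 1]."""
        for name, (lo, hi) in SERVO_RANGE.items():
            target = to_angle(features.get(name, 0.5), lo, hi)
            # Exponential smoothing keeps jittery face data from shaking the head.
            self.state[name] += self.alpha * (target - self.state[name])
        return dict(self.state)

# One fake tracker frame: raised brows, head roughly centered.
ctrl = MirrorController()
print(ctrl.update({"brow": 0.8, "eyelid": 0.2, "head_yaw": 0.5, "head_pitch": 0.5}))
```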
Eye contact mode has the robot moving on its own while, as you might guess, making uninterrupted eye contact with whoever is nearest. It’s a bit creepy, but not in the way that some robots are — when you’re looked at by inadequately modeled faces, it just feels like bad VFX. In this case it was more the surprising amount of empathy you suddenly feel for this little machine.

That’s largely due to the delicate, childlike, neutral sculpting of the face and highly realistic eyes. If an Amazon Echo had those eyes, you’d never forget it was listening to everything you say. You might even tell it your problems.
This is just an art project for now, but the tech behind it is definitely the kind of thing you can expect to be integrated with virtual assistants and the like in the near future. Whether that’s a good thing or a bad one I guess we’ll find out together.

Source: Gadgets – techcrunch

This bipedal robot has a flying head

Making a bipedal robot is hard. You have to maintain exquisite balance at all times and, even with the amazing things Atlas can do, there is still a chance that your crazy robot will fall over and bop its electronic head. But what if that head is a quadcopter?
Researchers at the University of Tokyo have done just that with their wild Aerial-Biped. The robot isn’t completely bipedal but is designed instead to act like a bipedal robot without the tricky issue of being truly bipedal. Think of these legs as more of a fun bit of puppetry that mimics walking but doesn’t really walk.
“The goal is to develop a robot that has the ability to display the appearance of bipedal walking with dynamic mobility, and to provide a new visual experience. The robot enables walking motion with very slender legs like those of a flamingo without impairing dynamic mobility. This approach enables casual users to choreograph biped robot walking without expertise. In addition, it is much cheaper compared to a conventional bipedal walking robot,” the team told IEEE.
The robot is similar to the bizarre-looking Ballu, a blimp robot with a floating head and spindly legs. The new robot learned how to walk convincingly through machine learning, a feat that gives it a realistic gait even though it is really an aerial system. It’s definitely a clever little project and could be interesting at a theme park or in an environment where a massive bipedal robot falling over on someone might be discouraged.
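To make the “puppetry” idea concrete, here’s a toy gait generator that swings two legs out of phase at a rate tied to the flying base’s speed. The real Aerial-Biped learns this mapping with machine learning; this hand-written stand-in only illustrates the concept.

```python
import math

def leg_angles(base_speed, t, stride_freq_per_mps=1.5):
    """Toy gait generator: swing the two hips out of phase, faster when the
    flying base moves faster, so the walking illusion tracks the body.
    (The real robot learns this mapping; the constants here are invented.)"""
    freq = stride_freq_per_mps * base_speed            # steps per second
    phase = 2 * math.pi * freq * t
    amplitude = min(25.0, 40.0 * base_speed)           # degrees of hip swing
    left_hip = amplitude * math.sin(phase)
    right_hip = amplitude * math.sin(phase + math.pi)  # opposite leg, half a cycle later
    return left_hip, right_hip

# Sample the gait over one second at a gentle pace of 0.5 m/s.
for step in range(5):
    t = step * 0.2
    print(t, [round(a, 1) for a in leg_angles(0.5, t)])
```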

Source: Gadgets – techcrunch

This happy robot helps kids with autism

A little bot named QTrobot from LuxAI could be the link between therapists, parents, and autistic children. The robot, which features an LCD face and robotic arms, allows kids who are overwhelmed by human contact to become more comfortable in a therapeutic setting.
The project comes from LuxAI, a spin-off of the University of Luxembourg. They will present their findings at the RO-MAN 2018 conference at the end of this month.
“The robot has the ability to create a triangular interaction between the human therapist, the robot, and the child,” co-founder Aida Nazarikhorram told IEEE. “Immediately the child starts interacting with the educator or therapist to ask questions about the robot or give feedback about its behavior.”
The robot reduces anxiety in autistic children and the researchers saw many behaviors – hand flapping, for example – slow down with the robot in the mix.
Interestingly, the robot is a better choice for children than an app or tablet. Because the robot is “embodied,” the researchers found that it draws attention and improves learning, especially when compared to a standard iPad/educational app pairing. In other words, children play with tablets and work with robots.
The robot is entirely self-contained and easily programmable. It can run for hours at a time and includes a 3D camera and full processor.
The researchers found that the robot doesn’t become the focus of the therapy but instead helps the therapist connect with the patient. This, obviously, is an excellent outcome for an excellent (and cute) little piece of technology.

Source: Gadgets – techcrunch

Analysis backs claim drones were used to attack Venezuela’s president

Analysis of open source information carried out by the investigative website Bellingcat suggests drones that had been repurposed as flying bombs were indeed used in an attack on the president of Venezuela this weekend.
The Venezuelan government claimed three days ago that an attempt had been made to assassinate President Nicolás Maduro using two drones loaded with explosives. The president had been giving a speech which was being broadcast live on television when the incident occurred.
Initial video from a state-owned television network showed the reaction of Maduro, those around him and a parade of soldiers at the event to what appeared to be two blasts somewhere off camera. But the footage did not include shots of any drones or explosions.
AP also reported that firefighters at the scene had cast doubt on the drone attack claim — suggesting there had instead been a gas explosion in a nearby flat.
Since then more footage has emerged, including videos purporting to show a drone exploding and a drone tumbling alongside a building.

Video evidence of the second drone, which exploded in the air without causing collateral damage #Sucesos Video courtesy pic.twitter.com/ipWR2sbYvW
— Caracas News 24 (@CaracasNews24) August 5, 2018

Bellingcat has carried out an analysis of publicly available information related to the attack. That includes syncing timings against the state broadcast of Maduro’s speech, and using frame-by-frame analysis combined with photos and satellite imagery of Caracas to pinpoint the locations of additional footage that has emerged — all to determine whether the drone attack claim stands up.
The Venezuelan government has claimed the drones used were DJI Matrice 600s, each carrying approximately 1kg of C4 plastic explosive and, when detonated, capable of causing damage at a radius of around 50 meters.

DJI Matrice 600 drones are a commercial model, normally used for industrial work — with a U.S. price tag of around $5,000 apiece, suggesting the attack could have cost little over $10k to carry out — with 1kg of plastic explosive available commercially (for demolition purposes) at a cost of around $30.
Bellingcat says its analysis supports the government’s claim that the drone model used was a DJI Matrice 600, noting that the drones involved in the event each had six rotors. It also points to a photo of drone wreckage which appears to show the distinctive silver rotor tip of the model, although it also notes the drones appear to have had their legs removed.
Venezuela’s interior minister, Nestor Reverol, also claimed the government thwarted the attack using “special techniques and [radio] signal inhibitors”, which “disoriented” the drone that detonated closest to the presidential stand — a capability Bellingcat notes the Venezuelan security services are reported to have.
The second drone was said by Reverol to have “lost control” and crashed into a nearby building.
Bellingcat says it is possible to geolocate the video of the falling drone to the same location as the fire in the apartment that firefighters had claimed was caused by a gas canister explosion. It adds that images taken of this location during the fire show a hole in the wall of the apartment in the vicinity of where the drone would have crashed.
“It is a very likely possibility that the downed drone subsequently detonated, creating the hole in the wall of this apartment, igniting a fire, and causing the sound of the second explosion which can be heard in Video 2 [of the state TV broadcast of Maduro’s speech],” it further suggests.
Here’s its conclusion:
From the open sources of information available, it appears that an attack took place using two DBIEDs while Maduro was giving a speech. Both the drones appear visually similar to DJI Matrice 600s, with at least one displaying features that are consistent with this model. These drones appear to have been loaded with explosive and flown towards the parade.
The first drone detonated somewhere above or near the parade, the most likely cause of the casualties announced by the Venezuelan government and pictured on social media. The second drone crashed and exploded approximately 14 seconds later and 400 meters away from the stage, and is the most likely cause of the fire which the Venezuelan firefighters described.
It also considers the claim of attribution by a group on social media calling itself “Soldados de Franelas” (aka ‘T-Shirt Soldiers’ — a reference to protesters wrapping a t-shirt around their head to cover their face and protect their identity). Bellingcat suggests it’s not clear from the group’s Twitter messages that they are “unequivocally claiming responsibility for the event”, owing to their use of passive language and to a claim that the drones were shot down by government snipers — which it says “does not appear to be supported by the open source information available”.

Source: Gadgets – techcrunch

NASA’s Open Source Rover lets you build your own planetary exploration platform

Got some spare time this weekend? Why not build yourself a working rover from plans provided by NASA? The spaceniks at the Jet Propulsion Laboratory have all the plans, code, and materials for you to peruse and use — just make sure you’ve got $2,500 and a bit of engineering know-how. This thing isn’t made out of Lincoln Logs.
The story is this: after Curiosity landed on Mars, JPL wanted to create something a little smaller and less complex that it could use for educational purposes. ROV-E, as they called this new rover, traveled with JPL staff throughout the country.
Unsurprisingly, among the many questions asked was often whether a class or group could build one of their own. The answer, unfortunately, was no: though far less expensive and complex than a real Mars rover, ROV-E was still too expensive and complex to be a class project. So JPL engineers decided to build one that wasn’t.
The result is the JPL Open Source Rover, a set of plans that mimic the key components of Curiosity but are simpler and use off-the-shelf components.
“I would love to have had the opportunity to build this rover in high school, and I hope that through this project we provide that opportunity to others,” said JPL’s Tom Soderstrom in a post announcing the OSR. “We wanted to give back to the community and lower the barrier of entry by giving hands on experience to the next generation of scientists, engineers, and programmers.”
The OSR uses Curiosity-like “Rocker-Bogie” suspension, corner steering and a pivoting differential to allow movement over rough terrain, and its brain is a Raspberry Pi. You can find all the parts in the usual supply catalogs and hardware stores, but you’ll also need a set of basic tools: a bandsaw to cut metal, probably a drill press, a soldering iron, snips and wrenches, and so on.
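For a flavor of what corner steering involves, here’s a small sketch that computes Ackermann-style angles for the four steerable corner wheels so all six wheels roll around a single shared turning point. The dimensions are placeholders; the real ones (and the real code) live in the project’s published plans.

```python
import math

# Placeholder geometry in meters; the real dimensions are in the OSR build docs.
# (forward offset x, lateral offset y) of each steerable corner wheel,
# measured from the rover's center. The two middle wheels don't steer.
WHEEL_OFFSETS = {
    "front_left":  ( 0.25,  0.20),
    "front_right": ( 0.25, -0.20),
    "rear_left":   (-0.25,  0.20),
    "rear_right":  (-0.25, -0.20),
}

def corner_steering_angles(turn_radius):
    """Ackermann-style angles so every wheel rolls on a circle about one shared
    turn center at (0, turn_radius). Positive radius turns left; the radius is
    assumed to be larger than the half-track, so no wheel sits on the center."""
    angles = {}
    for name, (x, y) in WHEEL_OFFSETS.items():
        # atan keeps each angle in (-90, 90) degrees, i.e. wheels never point backward.
        angles[name] = math.degrees(math.atan(x / (turn_radius - y)))
    return angles

print(corner_steering_angles(1.0))    # gentle left turn: inside wheels steer harder
print(corner_steering_angles(-0.6))   # tighter right turn
```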
“In our experience, this project takes no less than 200 person-hours to build, and depending on the familiarity and skill level of those involved could be significantly more,” the project’s creators write on the GitHub page.
So basically, unless you’re literally rocket scientists, expect double that — although JPL notes that it did work with schools to adjust the building process and instructions.
There’s flexibility built into the plans, too. So you can load custom apps, connect payloads and sensors to the brain, and modify the mechanics however you’d like. It’s open source, after all. Make it your own.
“We released this rover as a base model. We hope to see the community contribute improvements and additions, and we’re really excited to see what the community will add to it,” said project manager Mik Cox. “I would love to have had the opportunity to build this rover in high school, and I hope that through this project we provide that opportunity to others.”

Source: Gadgets – techcrunch

OpenAI’s robotic hand doesn’t need humans to teach it human behaviors

Gripping something with your hand is one of the first things you learn to do as an infant, but it’s far from a simple task, and it only gets more complex and variable as you grow up. That complexity makes gripping a difficult thing for machines to teach themselves, but researchers at Elon Musk and Sam Altman-backed OpenAI have created a system that not only holds and manipulates objects much like a human does, but developed these behaviors all on its own.
Many robots and robotic hands are already proficient at certain grips or movements — a robot in a factory can wield a bolt gun even more dexterously than a person. But the software that lets that robot do that task so well is likely to be hand-written and extremely specific to the application. You couldn’t, for example, give it a pencil and ask it to write. Even something on the same production line, like welding, would require a whole new system.

Yet for a human, picking up an apple isn’t so different from picking up a cup. There are differences, but our brains automatically fill in the gaps and we can improvise a new grip, hold an unfamiliar object securely and so on. This is one area where robots lag severely behind their human models. And furthermore, you can’t just train a bot to do what a human does — you’d have to provide millions of examples to adequately show what a human would do with thousands of given objects.
The solution, OpenAI’s researchers felt, was not to use human data at all. Instead, they let the computer try and fail over and over in a simulation, slowly learning how to move its fingers so that the object in its grasp moves as desired.
The system, which they call Dactyl, was provided only with the positions of its fingers and three camera views of the object in-hand — but remember, when it was being trained, all this data was simulated, taking place in a virtual environment. There, the computer doesn’t have to work in real time — it can try a thousand different ways of gripping an object in a few seconds, analyzing the results and feeding that data forward into the next try. (The hand itself is a Shadow Dexterous Hand, which is also more complex than most robotic hands.)
In addition to different objects and poses the system needed to learn, there were other randomized parameters, like the amount of friction the fingertips had, the colors and lighting of the scene and more. You can’t simulate every aspect of reality (yet), but you can make sure that your system doesn’t only work in a blue room, on cubes with special markings on them.
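That’s the essence of domain randomization: every training episode gets its own slightly different world, so the policy can’t overfit to any one of them. Here’s a minimal sketch of the idea, with illustrative parameters and ranges rather than OpenAI’s actual values:

```python
import random

def sample_randomized_world():
    """Sample one set of simulator parameters for a training episode.
    The parameter names and ranges here are illustrative, not OpenAI's."""
    return {
        "fingertip_friction": random.uniform(0.7, 1.3),
        "object_mass_kg":     random.uniform(0.03, 0.3),
        "object_size_scale":  random.uniform(0.95, 1.05),
        "light_intensity":    random.uniform(0.5, 1.5),
        "camera_jitter_deg":  random.uniform(-2.0, 2.0),
    }

def run_episode(world):
    # Stand-in for simulating the hand manipulating an object under `world`.
    return {"world": world, "reward": random.random()}

def train(policy_update, episodes=1000):
    """Skeleton training loop: every episode sees a differently randomized world,
    so the policy that emerges has to work across all of them — and, with luck,
    in the one un-randomized world we actually care about: reality."""
    for _ in range(episodes):
        world = sample_randomized_world()
        trajectory = run_episode(world)
        policy_update(trajectory)

# Toy usage: the "update" just collects episode rewards.
seen = []
train(lambda traj: seen.append(traj["reward"]), episodes=5)
print(len(seen), "episodes collected")
```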
They threw a lot of power at the problem: 6144 CPUs and 8 GPUs, “collecting about one hundred years of experience in 50 hours.” And then they put the system to work in the real world for the first time — and it demonstrated some surprisingly human-like behaviors.
The things we do with our hands without even noticing, like turning an apple around to check for bruises or passing a mug of coffee to a friend, use lots of tiny tricks to stabilize or move the object. Dactyl recreated several of them, for example holding the object with a thumb and single finger while using the rest to spin to the desired orientation.
What’s great about this system is not just the naturalness of its movements and that they were arrived at independently by trial and error, but that it isn’t tied to any particular shape or type of object. Just like a human, Dactyl can grip and manipulate just about anything you put in its hand, within reason of course.
This flexibility is called generalization, and it’s important for robots that must interact with the real world. It’s impossible to hand-code separate behaviors for every object and situation in the world, but a robot that can adapt and fill in the gaps while relying on a set of core understandings can get by.
As with OpenAI’s other work, the paper describing the results is freely available, as are some of the tools they used to create and test Dactyl.

Source: Gadgets – techcrunch

PSA: Drone flight restrictions are in force in the UK from today

Consumers using drones in the UK have new safety restrictions they must obey starting today, with a change to the law prohibiting drones from being flown above 400ft or within 1km of an airport boundary.
Anyone caught flouting the new restrictions could be charged with recklessly or negligently acting in a manner likely to endanger an aircraft or a person in an aircraft — which carries a penalty of up to five years in prison or an unlimited fine, or both.
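For the over-prepared, the two new limits are easy to turn into a pre-flight sanity check. The sketch below is nothing official — just the 400ft and 1km rules expressed as a function, with a made-up airport coordinate and simplified distance math:

```python
import math

FEET_PER_METRE = 3.28084
MAX_ALTITUDE_FT = 400
MIN_AIRPORT_DISTANCE_KM = 1.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def flight_allowed(altitude_m, drone_pos, airport_boundaries):
    """Check a planned flight against the two new UK limits (illustrative only)."""
    if altitude_m * FEET_PER_METRE > MAX_ALTITUDE_FT:
        return False, "above 400 ft"
    for lat, lon in airport_boundaries:
        if haversine_km(drone_pos[0], drone_pos[1], lat, lon) < MIN_AIRPORT_DISTANCE_KM:
            return False, "within 1 km of an airport boundary"
    return True, "ok"

# Toy usage with a made-up airport boundary point.
print(flight_allowed(100, (51.50, -0.12), [(51.47, -0.45)]))
```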
The safety restrictions were announced by the government in May, and have been brought in via an amendment to the 2016 Air Navigation Order.
They’re a stop-gap, because the government has also been working on a full drone bill — which was originally slated for spring but has been delayed.
However, the height and airport flight restrictions for drones were pushed forward, given the clear safety risks, after a year-on-year increase in reports of drone incidents involving aircraft.
The Civil Aviation Authority has today published research to coincide with the new laws, saying it’s found widespread support among the public for safety regulations for drones.
Commenting in a statement, the regulator’s assistant director Jonathan Nicholson said: “Drones are here to stay, not only as a recreational pastime, but as a vital tool in many industries — from agriculture to blue-light services — so increasing public trust through safe drone flying is crucial.”
“As recreational drone use becomes increasingly widespread across the UK it is heartening to see that awareness of the Dronecode has also continued to rise — a clear sign that most drone users take their responsibility seriously and are a credit to the community,” he added, referring to the (informal) set of rules developed by the body to promote safe use of consumer drones — ahead of the government legislating.
Additional measures the government has confirmed it will legislate for — announced last summer — include a requirement for owners of drones weighing 250 grams or more to register with the CAA, and for drone pilots to take an online safety test. The CAA says these additional requirements will be enforced from November 30, 2019 — with more information on the registration scheme set to follow next year.
For now, though, UK drone owners just need to make sure they’re not flying too high or too close to airports.
Earlier this month it emerged the government is considering age restrictions on drone use too, though it remains to be seen whether those proposals will make it into the future drone bill.

Source: Gadgets – techcrunch

This smart prosthetic ankle adjusts to rough terrain

Prosthetic limbs are getting better and more personalized, but useful as they are, they’re still a far cry from the real thing. This new prosthetic ankle is a little closer than others, though: it moves on its own, adapting to its user’s gait and the surface on which it lands.
Your ankle does a lot of work when you walk: lifting your toe out of the way so you don’t scuff it on the ground, controlling the tilt of your foot to minimize the shock when it lands or as you adjust your weight, all while conforming to bumps and other irregularities it encounters. Few prostheses attempt to replicate these motions, meaning all that work is done in a more basic way, like the bending of a spring or compression of padding.
But this prototype ankle from Michael Goldfarb, a mechanical engineering professor at Vanderbilt, goes much further than passive shock absorption. Inside the joint are a motor and actuator, controlled by a chip that senses and classifies motion and determines how each step should look.

“This device first and foremost adapts to what’s around it,” Goldfarb said in a video documenting the prosthesis.
“You can walk up slopes, down slopes, up stairs and down stairs, and the device figures out what you’re doing and functions the way it should,” he added in a news release from the university.
When it senses that the foot has lifted up for a step, it can lift the toe up to keep it clear, also exposing the heel so that when the limb comes down, it can roll into the next step. And by reading the pressure both from above (indicating how the person is using that foot) and below (indicating the slope and irregularities of the surface) it can make that step feel much more like a natural one.
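Here’s a minimal sketch of that kind of control logic — a crude gait-phase classifier plus invented thresholds, not Goldfarb’s actual controller:

```python
from dataclasses import dataclass

@dataclass
class SensorFrame:
    load_above_n: float   # force from the user's limb pressing down
    load_below_n: float   # ground reaction force under the foot
    slope_deg: float      # estimated ground slope under the foot

SWING_TOE_LIFT_DEG = 10.0   # dorsiflex during swing so the toe clears the ground

def classify_phase(frame, contact_threshold_n=20.0):
    """Very rough gait-phase classifier: foot in the air vs. on the ground."""
    return "swing" if frame.load_below_n < contact_threshold_n else "stance"

def ankle_target_deg(frame):
    """Choose an ankle angle for this control tick (illustrative logic only)."""
    if classify_phase(frame) == "swing":
        return SWING_TOE_LIFT_DEG                 # keep the toe up, expose the heel
    # In stance, conform to the ground and lean into the load a little.
    weight_shift = min(frame.load_above_n / 800.0, 1.0)   # 800 N ~ body weight
    return -frame.slope_deg + 5.0 * weight_shift

# One tick in swing, one in stance on a 5-degree downhill slope under load.
print(ankle_target_deg(SensorFrame(50.0, 5.0, 0.0)))     # swing: 10.0
print(ankle_target_deg(SensorFrame(700.0, 600.0, 5.0)))  # stance, downhill
```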

One veteran of many prostheses, Mike Sasser, tested the device and had good things to say: “I’ve tried hydraulic ankles that had no sort of microprocessors, and they’ve been clunky, heavy and unforgiving for an active person. This isn’t that.”
Right now the device is still very lab-bound, and it runs on wired power — not exactly convenient if someone wants to go for a walk. But if the joint works as designed, as it certainly seems to, then powering it is a secondary issue. The plan is to commercialize the prosthesis in the next couple of years once all that is figured out. You can learn a bit more about Goldfarb’s research at the Center for Intelligent Mechatronics.

Source: Gadgets – techcrunch