‘Jackrabbot 2’ takes to the sidewalks to learn how humans navigate politely

Autonomous vehicles and robots have to know how to get from A to B without hitting obstacles or pedestrians — but how can they do so politely and without disturbing nearby humans? That’s what Stanford’s Jackrabbot project aims to learn, and now a redesigned robot will be cruising campus learning the subtleties of humans negotiating one another’s personal space.
“There are many behaviors that we humans subconsciously follow – when I’m walking through crowds, I maintain personal distance or, if I’m talking with you, someone wouldn’t go between us and interrupt,” said grad student Ashwini Pokle in a Stanford News release. “We’re working on these deep learning algorithms so that the robot can adapt these behaviors and be more polite to people.”
Of course, there are practical applications pertaining to last-mile problems and robotic delivery as well. What do you do if someone stops in front of you? What if there’s a group running up behind? Experience is the best teacher, as usual.
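The article doesn’t spell out the team’s algorithms, but the basic idea behind “polite” path planning is easy to sketch: when the robot scores candidate moves, it adds a penalty for getting inside anyone’s personal space. The snippet below is purely illustrative; the comfort radius, weights and function names are assumptions of mine, not anything from the Jackrabbot project.

```python
# Illustrative sketch only, not the Jackrabbot team's code. It shows one common
# way to encode "personal space" in a planner: penalize candidate robot positions
# that come too close to where nearby pedestrians are predicted to be.
import math

PERSONAL_SPACE_M = 1.2   # assumed comfort radius in meters, not the project's value
WEIGHT = 5.0             # assumed penalty weight

def social_cost(candidate, predicted_pedestrians):
    """candidate: (x, y) robot position; predicted_pedestrians: list of (x, y) positions."""
    cost = 0.0
    for px, py in predicted_pedestrians:
        d = math.hypot(candidate[0] - px, candidate[1] - py)
        if d < PERSONAL_SPACE_M:
            # Penalize intrusions into personal space, more sharply the closer the robot gets.
            cost += WEIGHT * (PERSONAL_SPACE_M - d) / PERSONAL_SPACE_M
    return cost

def pick_step(options, goal, predicted_pedestrians):
    """Choose the candidate step that balances progress toward the goal
    against intruding on anyone's personal space."""
    def total(c):
        remaining = math.hypot(goal[0] - c[0], goal[1] - c[1])
        return remaining + social_cost(c, predicted_pedestrians)
    return min(options, key=total)

# Example: two candidate steps, one of which would cut between two people standing close together.
people = [(1.0, 0.0), (1.0, 1.0)]
print(pick_step([(1.0, 0.5), (1.0, -1.5)], goal=(3.0, 0.0), predicted_pedestrians=people))
```

In a real system, the pedestrian predictions would come from learned models of how people actually move, which is exactly the kind of data these robots are out collecting.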
The first robot was put to work in 2016, and it has been hard at work building a model of how humans (well, mostly undergrads) walk around safely, avoiding one another while taking efficient paths, and signaling what they’re doing the whole time. But technology has advanced so quickly that a new iteration was called for.
The JackRabbot project team with JackRabbot 2 (from left to right): Patrick Goebel, Noriaki Hirose, Tin Tin Wisniewski, Amir Sadeghian, Alan Federman, Silvio Savarese, Roberto Martín-Martín, Pin Pin Tea-mangkornpan and Ashwini Pokle
The new robot has a vastly improved sensor suite compared to its predecessor: two Velodyne lidar units giving 360-degree coverage, plus a set of stereo cameras making up its neck that give it another depth-sensing 360-degree view. The cameras and sensors on its head can also be pointed wherever needed, of course, just like ours. All this imagery is collated by a pair of new GPUs in its base/body.
Amir Sadeghian, one of the researchers, said this makes Jackrabbot 2 “one of the most powerful robots of its size that has ever been built.”

This will allow the robot to sense human motion with a much greater degree of precision than before, and also operate more safely. It will also give the researchers a chance to see how the movement models created by the previous robot integrate with this new imagery.
The other major addition is a totally normal-looking arm that Jackrabbot 2 can use to gesture to others. After all, we do it, right? When it’s unclear who should enter a door first or what side of a path they should take, a wave of the hand is all it takes to clear things up. Usually. Hopefully this kinked little gripper accomplishes the same thing.
Jackrabbot 2 can zoom around for several hours at a time, Sadeghian said. “At this stage of the project for safety we have a human with a safety switch accompanying the robot, but the robot is able to navigate in a fully autonomous way.”
Having working knowledge of how people use the space around them and how to predict their movements will be useful to startups like Kiwi, Starship, and Marble. The first time a delivery robot smacks into someone’s legs is the last time they consider ordering something via one.


VR optics could help old folks keep the world in focus

The complex optics involved with putting a screen an inch away from the eye in VR headsets could make for smartglasses that correct for vision problems. These prototype “autofocals” from Stanford researchers use depth sensing and gaze tracking to bring the world into focus when someone lacks the ability to do it on their own.
I talked with lead researcher Nitish Padmanaban at SIGGRAPH in Vancouver, where he and the others on his team were showing off the latest version of the system. It’s meant, he explained, to be a better solution to the problem of presbyopia, which is basically when your eyes refuse to focus on close-up objects. It happens to millions of people as they age, even people with otherwise excellent vision.
There are, of course, bifocals and progressive lenses that bend light in such a way as to bring such objects into focus — purely optical solutions, and cheap as well, but inflexible, and they only provide a small “viewport” through which to view the world. And there are adjustable-lens glasses as well, but they must be adjusted slowly and manually with a dial on the side. What if you could make the whole lens change shape automatically, depending on the user’s need, in real time?
That’s what Padmanaban and colleagues Robert Konrad and Gordon Wetzstein are working on, and although the current prototype is obviously far too bulky and limited for actual deployment, the concept seems totally sound.
Padmanaban previously worked in VR, and mentioned what’s called the convergence-accommodation problem. Basically, the way that we see changes in real life when we move and refocus our eyes from far to near doesn’t happen properly (if at all) in VR, and that can produce pain and nausea. Having lenses that automatically adjust based on where you’re looking would be useful there — and indeed some VR developers were showing off just that only 10 feet away. But it could also apply to people who are unable to focus on nearby objects in the real world, Padmanaban thought.
This is an old prototype, but you get the idea.
It works like this. A depth sensor on the glasses collects a basic view of the scene in front of the person: a newspaper is 14 inches away, a table three feet away, the rest of the room considerably more. Then an eye-tracking system checks where the user is currently looking and cross-references that with the depth map.
Having been equipped with the specifics of the user’s vision problem, for instance that they have trouble focusing on objects closer than 20 inches away, the apparatus can then make an intelligent decision as to whether and how to adjust the lenses of the glasses.
In the case above, if the user was looking at the table or the rest of the room, the glasses will assume whatever normal correction the person requires to see — perhaps none. But if they change their gaze to focus on the paper, the glasses immediately adjust the lenses (perhaps independently per eye) to bring that object into focus in a way that doesn’t strain the person’s eyes.
The whole process of checking the gaze, the depth of the selected object and the adjustment of the lenses takes a total of about 150 milliseconds. That’s long enough that the user might notice it happening, but the whole process of redirecting and refocusing one’s gaze takes perhaps three or four times that long — so the changes in the device will be complete by the time the user’s eyes would normally be at rest again.
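To make the pipeline concrete, here is a minimal sketch of the decision the glasses have to make on each cycle: look up how far away the gazed-at object is, then decide how much extra optical power to dial in. It is only an illustration under simple assumptions (a single near-focus limit of about 20 inches and a basic vergence model), not the Stanford prototype’s actual algorithm.

```python
# Hedged sketch of the autofocal decision step described above. All names,
# the 20-inch threshold and the simplified diopter model are assumptions.

NEAR_LIMIT_M = 0.5  # ~20 inches: assumed closest distance the user can focus unaided

def choose_focal_correction(gaze_target_distance_m, baseline_correction_diopters=0.0):
    """Given how far away the gazed-at object is (from cross-referencing the
    eye tracker with the depth map), decide how strongly to adjust the lenses."""
    if gaze_target_distance_m >= NEAR_LIMIT_M:
        # Far enough away that the user's ordinary correction (possibly none) suffices.
        return baseline_correction_diopters
    # Add the extra optical power (in diopters) needed to pull the near object into
    # focus: 1/d is the object's vergence, 1/NEAR_LIMIT_M is what the eye can still
    # handle on its own. This is a deliberately simplified model.
    extra_diopters = (1.0 / gaze_target_distance_m) - (1.0 / NEAR_LIMIT_M)
    return baseline_correction_diopters + extra_diopters

# Example: a newspaper held 14 inches (~0.36 m) away
print(choose_focal_correction(0.36))   # roughly 0.8 extra diopters
```

The real device also has to do this per eye, and fit the whole decision inside the 150-millisecond budget described above.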
“Even with an early prototype, the Autofocals are comparable to and sometimes better than traditional correction,” reads a short summary of the research published for SIGGRAPH. “Furthermore, the ‘natural’ operation of the Autofocals makes them usable on first wear.”
The team is currently conducting tests to measure more quantitatively the improvements derived from this system, and test for any possible ill effects, glitches or other complaints. They’re a long way from commercialization, but Padmanaban suggested that some manufacturers are already looking into this type of method and despite its early stage, it’s highly promising. We can expect to hear more from them when the full paper is published.


NASA’s Parker Solar Probe launches to ‘touch the sun’

Update: Launch successful!
NASA’s ambitious mission to go closer to the Sun than ever before is set to launch in the small hours between Friday and Saturday — at 3:53 AM Eastern from Kennedy Space Center in Florida, to be precise. The Parker Solar Probe, after a handful of gravity assists and preliminary orbits, will enter a stable orbit around the enormous nuclear fireball that gives us all life and sample its radiation from less than 4 million miles away. Believe me, you don’t want to get much closer than that.
If you’re up late tonight (technically tomorrow morning), you can watch the launch live on NASA’s stream.
This is the first mission named after a living researcher, in this case Eugene Parker, who in the ’50s made a number of proposals and theories about the way that stars give off energy. He’s the guy who gave us solar wind, and his research was hugely influential in the study of the sun and other stars — but it’s only now that some of his hypotheses can be tested directly.
(Parker himself visited the craft during its construction, and will be at the launch. No doubt he is immensely proud and excited about this whole situation.)

“Directly” means going as close to the sun as technology allows — which leads us to the PSP’s first major innovation: its heat shield, or thermal protection system.

There’s one good thing to be said for the heat near the sun: it’s a dry heat. Because there’s no water vapor or gases in space to heat up, find some shade and you’ll be quite comfortable. So the probe is essentially carrying the most heavy-duty parasol ever created.
It’s a sort of carbon sandwich, with superheated carbon composite on the outside and a carbon foam core. All together it’s less than a foot thick, but it reduces the temperature the probe’s instruments are subjected to from 2,500 degrees Fahrenheit to 85 — actually cooler than it is in much of the U.S. right now.
Go on – it’s quite cool.
The car-sized Parker will orbit the sun and constantly rotate itself so the heat shield is facing inward and blocking the brunt of the solar radiation. The instruments mostly sit behind it in a big insulated bundle.
And such instruments! There are three major experiments or instrument sets on the probe.
WISPR (Wide-Field Imager for Parker Solar Probe) is a pair of wide-field telescopes that will watch and image the structure of the corona and solar wind. This is the kind of observation we’ve made before — but never from up close. We generally see these phenomena from the neighborhood of the Earth, nearly 100 million miles away. You can imagine that cutting out 90 million miles of cosmic dust, interfering radiation and other nuisances will produce an amazingly clear picture.
SWEAP (Solar Wind Electrons Alphas and Protons investigation) looks out to the side of the craft to watch the flows of electrons as they are affected by solar wind and other factors. And on the front is the Solar Probe Cup (I suspect this is a reference to the Ray Bradbury story, “Golden Apples of the Sun”), which is exposed to the full strength of the sun’s radiation; a tiny opening allows charged particles in, and by tracking how they pass through a series of charged windows, the instrument can sort them by type and energy.
FIELDS is another instrument set that gets the full heat of the sun. Its antennas are the ones sticking out from the sides — they have to be exposed in order to directly sample the electric field surrounding the craft. A set of “fluxgate magnetometers,” clearly a made-up name, measures the magnetic field at an incredibly high rate: two million samples per second.
They’re all powered by solar panels, which seems obvious, but actually it’s a difficult proposition to keep the panels from overloading that close to the sun. They hide behind the shield and just peek out at an oblique angle, so only a fraction of the radiation hits them.
Even then, they’ll get so hot that the team needed to implement the first-ever active water cooling system on a spacecraft. Water is pumped through the cells and back behind the shield, where it is cooled by, well, space.
The probe’s mission profile is a complicated one. After escaping the clutches of the Earth, it will swing by Venus, not to get a gravity boost, but “almost like doing a little handbrake turn,” as one official described it. The flyby slows the probe down and sends it closer to the sun — and it’ll do that seven more times, each time bringing it closer and closer to the sun’s surface, ultimately arriving in a stable orbit 3.83 million miles above the surface — that’s 95 percent of the way from the Earth to the sun.
On the way it will hit a top speed of 430,000 miles per hour, which will make it the fastest spacecraft ever launched.

Parker will make 24 total passes through the corona, and during these times communication with Earth may be interrupted or impractical. If a solar cell is overheating, do you want to wait 20 minutes for a decision from NASA on whether to pull it back? No. This close to the sun even a slight miscalculation results in the reduction of the probe to a cinder, so the team has imbued it with more than the usual autonomy.
It’s covered in sensors in addition to its instruments, and an onboard AI will be empowered to make decisions to rectify anomalies. That sounds worryingly like a HAL 9000 situation, but there are no humans on board to kill, so it’s probably okay.
The mission is scheduled to last seven years, after which time the fuel used to correct the craft’s orbit and orientation is expected to run out. At that point it will continue as long as it can before drift causes it to break apart and, one rather hopes, become part of the sun’s corona itself.
The Parker Solar Probe is scheduled for launch early Saturday morning, and we’ll update this post when it takes off successfully or, as is possible, is delayed until a later date in the launch window.


This 3D-printed camp stove is extra-efficient and wind-resistant

I love camping, but there’s always an awkward period, after you’ve left the tent but before you’ve made coffee, during which I hate camping. It’s hard to watch the pot not boil and not want to just crawl back into bed, but since the warm air escaped when I opened the tent, there’s no point! Anyway, the Swiss figured out a great way to boil water faster, and I want one of these sweet stoves now.
The PeakBoil stove comes from design students at ETH Zurich, who have clearly faced the same problems as myself. But since they actually camp in inclement weather, they also have to deal with wind blowing out the feeble flame of an ordinary gas burner.
Their attempt to improve on the design takes the controversial step of essentially installing a stovepipe inside the vessel and heating it from the inside out rather than from the bottom up. This has been used in lots of other situations to heat water but it’s the first time I’ve seen it in a camp stove.
By carefully configuring the gas nozzles and adding ripples to the wall of the heat pipe, PeakBoil “increases the contact area between the flame and the jug,” explained doctoral student and project leader Julian Ferchow in an ETH Zurich news release.
“That, plus the fact that the wall is very thin, makes heat transfer to the contents of the jug ideal,” added his colleague Patrick Beutler.

Keeping the flames isolated inside the chimney, behind baffles, minimizes wind interference and saves you from having to burn extra gas just to keep the flame alive.
The design was created using a selective laser melting or sintering process, in which metal powder is melted in a pattern much like a 3D printer lays down heated plastic. It’s really just another form of additive manufacturing, and it gave the students “a huge amount of design freedom…with metal casting, for instance, we could never achieve channels that are as thin as the ones inside our gas burner,” Ferchow said.
Of course, the design means it’s pretty much only usable for boiling water (you wouldn’t want to balance a pan on top of it), but that’s such a common and specific use case that many campers already have a stove dedicated to the purpose.
The team is looking to further improve the design and also find an industry partner with which to take it to market. MSR, GSI, REI… I’m looking at you. Together we can make my mornings bearable.


NASA’s Open Source Rover lets you build your own planetary exploration platform

Got some spare time this weekend? Why not build yourself a working rover from plans provided by NASA? The spaceniks at the Jet Propulsion Laboratory have all the plans, code, and materials for you to peruse and use — just make sure you’ve got $2,500 and a bit of engineering know-how. This thing isn’t made out of Lincoln Logs.
The story is this: after Curiosity landed on Mars, JPL wanted to create something a little smaller and less complex that it could use for educational purposes. ROV-E, as they called this new rover, traveled with JPL staff throughout the country.
Unsurprisingly, one of the questions asked most often was whether a class or group could build one of their own. The answer, unfortunately, was no: though far less expensive and complex than a real Mars rover, ROV-E was still too expensive and complex to be a class project. So JPL engineers decided to build one that wasn’t.
The result is the JPL Open Source Rover, a set of plans that mimic the key components of Curiosity but are simpler and use off the shelf components.
“We wanted to give back to the community and lower the barrier of entry by giving hands-on experience to the next generation of scientists, engineers, and programmers,” said JPL’s Tom Soderstrom in a post announcing the OSR.
The OSR uses Curiosity-like “Rocker-Bogie” suspension, corner steering and a pivoting differential, allowing movement over rough terrain, and its brain is a Raspberry Pi. You can find all the parts in the usual supply catalogs and hardware stores, but you’ll also need a set of basic tools: a bandsaw to cut metal, probably a drill press, a soldering iron, snips and wrenches, and so on.
“In our experience, this project takes no less than 200 person-hours to build, and depending on the familiarity and skill level of those involved could be significantly more,” the project’s creators write on the GitHub page.
So basically, unless you’re literally rocket scientists, expect double that. JPL notes, though, that it did work with schools to adjust the building process and instructions.
There’s flexibility built into the plans, too. So you can load custom apps, connect payloads and sensors to the brain, and modify the mechanics however you’d like. It’s open source, after all. Make it your own.
“We released this rover as a base model. We hope to see the community contribute improvements and additions, and we’re really excited to see what the community will add to it,” said project manager Mik Cox. “I would love to have had the opportunity to build this rover in high school, and I hope that through this project we provide that opportunity to others.”


OpenAI’s robotic hand doesn’t need humans to teach it human behaviors

Gripping something with your hand is one of the first things you learn to do as an infant, but it’s far from a simple task, and it only gets more complex and variable as you grow up. This complexity makes gripping a difficult thing for machines to teach themselves, but researchers at the Elon Musk- and Sam Altman-backed OpenAI have created a system that not only holds and manipulates objects much like a human does, but developed these behaviors all on its own.
Many robots and robotic hands are already proficient at certain grips or movements — a robot in a factory can wield a bolt gun even more dexterously than a person. But the software that lets that robot do that task so well is likely to be hand-written and extremely specific to the application. You couldn’t, for example, give it a pencil and ask it to write. Even something on the same production line, like welding, would require a whole new system.


Yet for a human, picking up an apple isn’t so different from picking up a cup. There are differences, but our brains automatically fill in the gaps and we can improvise a new grip, hold an unfamiliar object securely and so on. This is one area where robots lag severely behind their human models. And furthermore, you can’t just train a bot to do what a human does — you’d have to provide millions of examples to adequately show what a human would do with thousands of given objects.
The solution, OpenAI’s researchers felt, was not to use human data at all. Instead, they let the computer try and fail over and over in a simulation, slowly learning how to move its fingers so that the object in its grasp moves as desired.
The system, which they call Dactyl, was provided only with the positions of its fingers and three camera views of the object in hand — but remember, when it was being trained, all this data was simulated, taking place in a virtual environment. There, the computer doesn’t have to work in real time — it can try a thousand different ways of gripping an object in a few seconds, analyzing the results and feeding that data forward into the next try. (The hand itself is a Shadow Dexterous Hand, which is also more complex than most robotic hands.)
In addition to different objects and poses the system needed to learn, there were other randomized parameters, like the amount of friction the fingertips had, the colors and lighting of the scene and more. You can’t simulate every aspect of reality (yet), but you can make sure that your system doesn’t only work in a blue room, on cubes with special markings on them.
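This technique is generally known as domain randomization: every simulated attempt happens in a slightly different “reality,” so the only strategies that survive are ones that don’t depend on any single set of conditions. Here’s a rough sketch of what that looks like in code; the parameter names, ranges and placeholder simulator/policy functions are illustrative assumptions, not OpenAI’s actual training setup.

```python
# Conceptual sketch of domain randomization, not OpenAI's training code.
# The randomized parameters and the stubbed simulator/policy calls are assumptions.
import random

def randomized_episode_config():
    """Draw a fresh set of physical and visual parameters for one simulated episode,
    so the learned policy can't rely on any single version of 'reality'."""
    return {
        "fingertip_friction": random.uniform(0.5, 1.5),   # assumed range
        "object_mass_kg":     random.uniform(0.03, 0.3),
        "object_size_scale":  random.uniform(0.95, 1.05),
        "light_intensity":    random.uniform(0.3, 1.0),
        "camera_jitter_deg":  random.uniform(-2.0, 2.0),
        "surface_color":      [random.random() for _ in range(3)],
    }

def train(num_episodes, simulate_episode, update_policy):
    """simulate_episode(config) -> trajectory; update_policy(trajectory) -> None.
    Both are placeholders for the simulator and the reinforcement-learning step."""
    for _ in range(num_episodes):
        config = randomized_episode_config()
        trajectory = simulate_episode(config)  # many of these can run in parallel
        update_policy(trajectory)

# Tiny demo with stub functions, just to show the loop running:
train(3, simulate_episode=lambda cfg: cfg, update_policy=lambda traj: print("trained on", traj))
```

Pairing this kind of randomization with massive parallel reinforcement learning is where the “one hundred years of experience in 50 hours” figure below comes from.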
They threw a lot of power at the problem: 6144 CPUs and 8 GPUs, “collecting about one hundred years of experience in 50 hours.” And then they put the system to work in the real world for the first time — and it demonstrated some surprisingly human-like behaviors.
The things we do with our hands without even noticing, like turning an apple around to check for bruises or passing a mug of coffee to a friend, use lots of tiny tricks to stabilize or move the object. Dactyl recreated several of them, for example holding the object with a thumb and a single finger while using the rest to spin it to the desired orientation.
What’s great about this system is not just the naturalness of its movements and that they were arrived at independently by trial and error, but that it isn’t tied to any particular shape or type of object. Just like a human, Dactyl can grip and manipulate just about anything you put in its hand, within reason of course.
This flexibility is called generalization, and it’s important for robots that must interact with the real world. It’s impossible to hand-code separate behaviors for every object and situation in the world, but a robot that can adapt and fill in the gaps while relying on a set of core understandings can get by.
As with OpenAI’s other work, the paper describing the results is freely available, as are some of the tools they used to create and test Dactyl.


NASA’s 3D-printed Mars Habitat competition doles out prizes to concept habs

A multi-year NASA contest to design a 3D-printable Mars habitat using on-planet materials has just hit another milestone — and a handful of teams have taken home some cold, hard cash. This more laid-back phase had contestants designing their proposed habitat using architectural tools, with the five winners set to build scale models next year.
Technically this is the first phase of the third phase — the (actual) second phase took place last year and teams took home quite a bit of money.
The teams had to put together realistic 3D models of their proposed habitats, and not just in Blender or something. They used Building Information Modeling software that would require these things to be functional structures designed down to a particular level of detail — so you can’t just have 2D walls made of “material TBD,” and you have to take into account thickness from pressure sealing, air filtering elements, heating, etc.
The habitats had to have at least a thousand square feet of space, enough for four people to live for a year, along with room for the machinery and paraphernalia associated with, you know, living on Mars. They must be largely assembled autonomously, at least enough that humans can occupy them as soon as they land. They were judged on completeness, layout, 3D-printing viability and aesthetics.

So although the images you see here look rather sci-fi, keep in mind they were also designed using industrial tools and vetted by experts with “a broad range of experience from Disney to NASA.” These designs are meant for Mars, not for the cover of a paperback. And they’ll have to be built in miniature for real next year, so they’d better be realistic.
The five winning designs embody a variety of approaches. Honestly all these videos are worth a watch; you’ll probably learn something cool, and they really give an idea of how much thought goes into these designs.

Zopherus has the whole print taking place inside the body of a large lander, which brings its own high-strength printing mix to reinforce the “Martian concrete” that will make up the bulk of the structure. When it’s done printing and embedding the pre-built items like airlocks, it lifts itself up, moves over a few feet, and does it again, creating a series of small rooms. (They took first place and essentially tied the next team for take-home cash, a little under $21K.)

AI SpaceFactory focuses on the basic shape of the vertical cylinder as both the most efficient use of space and one of the shapes most suitable for printing. They go deep on the accommodations for thermal expansion and insulation, but have also thought deeply about how to make the space safe, functional, and interesting. This one is definitely my favorite.

Kahn-Yates has a striking design, with a printed structural layer giving way to a high-strength plastic layer that lets the light in. Their design is extremely spacious but in my eyes not very efficiently allocated. Who’s going to bring apple trees to Mars? Why have a spiral staircase with such a huge footprint? Still, if they could pull it off, this would allow for a lot of breathing room, something that will surely be of great value during a year or multi-year stay on the planet.

SEArch+/Apis Cor has carefully considered the positioning and shape of its design to maximize light and minimize radiation exposure. There are two independent pressurized areas — everyone likes redundancy — and it’s built using a sloped site, which may expand the possible locations. It looks a little claustrophobic, though.

Northwestern University has a design that aims for simplicity of construction: an inflatable vessel provides the base for the printer to create a simple dome with reinforcing cross-beams. This practical approach no doubt won them points, and the inside, while not exactly roomy, is also practical in its layout. As AI SpaceFactory pointed out, a dome isn’t really the best shape (lots of wasted space) but it is easy and strong. A couple of these connected at the ends wouldn’t be so bad.
The teams split a total of $100K for this phase, and are now moving on to the hard part: actually building these things. In spring of 2019 they’ll be expected to have a working custom 3D printer that can create a 1:3 scale model of their habitat. It’s difficult to say who will have the worst time of it, but I’m thinking Kahn-Yates (that holey structure will be a pain to print) and SEArch+/Apis (slope, complex eaves and structures).
The purse for the real-world construction is an eye-popping $2 million, so you can bet the competition will be fierce. In the meantime, seriously, watch those videos above, they’re really interesting.


‘Underwater Pokéball’ snatches up soft-bodied deep dwellers

Creatures that live in the depths of the oceans are often extremely fragile, making their collection a difficult affair. A new polyhedral sample-collection mechanism acts like an “underwater Pokéball,” allowing scientists to catch ’em all without destroying their soft, squishy bodies in the process.
The ball is technically a dodecahedron that closes softly around the creature in front of it. It’s not exactly revolutionary, except in that it is extremely simple mechanically — at depths of thousands of feet, the importance of this can’t be overstated — and non-destructive.
Sampling is often done via a tube with moving caps on both ends into which the creature must be guided and trapped, or via a vacuum tube that sucks it in, which, as you can imagine, is at best unpleasant for the target and at worst lethal.
The rotary actuated dodecahedron, or RAD, has five 3D-printed “petals” with a complex-looking but mechanically simple framework that allows them to close up simultaneously from force applied at a single point near the rear panel.
“I was building microrobots by hand in graduate school, which was very painstaking and tedious work,” explained creator Zhi Ern Teoh, of Harvard’s Wyss Institute, “and I wondered if there was a way to fold a flat surface into a three-dimensional shape using a motor instead.”
The answer is yes, obviously, since he made it; the details are published in Science Robotics. Inspired by origami and papercraft, Teoh and his colleagues applied their design knowledge to creating not just a fold-up polyhedron (you can cut one out of any sheet of paper) but a mechanism that would perform that folding process in one smooth movement. The result is a network of hinged arms around the polyhedron, tuned to push lightly and evenly and seal it up.
In testing, the RAD successfully captured some moon jellies in a pool, then at around 2,000 feet below the ocean surface was able to snag squid, octopus and wild jellies and release them again with no harm done. They didn’t capture the octopus on camera, but apparently it was curious about the device.
Because of the RAD’s design, it would work just as well miles below the surface, the researchers said, though they haven’t had a chance to test that yet.
“The RAD sampler design is perfect for the difficult environment of the deep ocean because its controls are very simple, so there are fewer elements that can break,” Teoh said.
There’s also no barrier to building a larger one, or a similar device that would work in space, he pointed out. As for current applications like sampling of ocean creatures, the setup could easily be enhanced with cameras and other tools or sensors.
“In the future, we can capture an animal, collect lots of data about it like its size, material properties, and even its genome, and then let it go,” said co-author David Gruber, from CUNY. “Almost like an underwater alien abduction.”


Watch Rocket Lab’s first commercial launch, ‘It’s Business Time’ [Update: Postponed]

Rocket Lab, the New Zealand-based rocket company that is looking to further amplify the commercial space frenzy, is launching its first fully paid payload atop an Electron rocket tonight — technically tomorrow morning at the launch site. If successful, it will mark a significant new development in the highly competitive world of commercial launches.
Update: Sorry, folks, but not today. The company said it will announce a new target soon; the launch window remains open through July 6.
Liftoff is planned for 2:10 in the morning local time in New Zealand, or 7:10 Pacific time in the U.S.; the live stream will start about 20 minutes before that.

The Electron rocket is a far smaller one than the Falcon 9s we see so frequently these days, with a nominal payload of 150 kilograms, just a fraction of the many tons that we see sent up by SpaceX. But that’s the whole point, Rocket Lab’s founder, CEO and chief engineer Peter Beck told me recently.
“You can go buy a spot on a big launch vehicle, but they’re not very frequent. With a small rocket you can choose your orbit and choose your schedule,” he said. “That’s what we’re driving at here: regular and reliable access to space.”
An Electron rocket launching during a previous test
Just like not every car on the road has to be a big rig, not every rocket needs to be a Saturn V; 150 kilos is more than enough to fill with paying customers and cover the cost of launch. And Beck told me there is no shortage whatsoever of paying customers.
“The most important part of the mission is the timing in which we manifested it,” he explained (manifesting meaning having a payload added to the manifest). “We went from nothing manifested to a full payload in about 12 weeks.”
For comparison, some missions or payloads will wait literally years before there’s an opportunity to get to the orbit they need. Loading up just a few weeks ahead of time is unusual, to say the least.
Today’s launch will carry satellites from Spire, Tyvak/GeoOptics, students at UC Irvine and High Performance Space Structure Systems; you can see the specifics of these on the manifest (PDF). It’s not the first time an Electron has taken a paid payload to orbit, but it is the first fully commercialized launch.
Rocket Lab has no ambitions for interplanetary travel, sending people to space or anything like that. It just wants to take 150 kilograms to orbit as often as it can, as inexpensively as it can.
“We’re not interested in building a bigger rocket, we’re interested in building more of this one,” Beck said. “The vehicle is fully dialed in; we started from day one with this vehicle designed from a production approach. We’re fully vertically integrated, we don’t have any contractors, we do everything in-house. We’ve been scaling up the factories enormously.”
“We’re looking for a one-a-month cadence this year, then next year one every two weeks,” he continued. “Frequency is the key — it’s the choke point in space right now.”
Ultimately the plan is to get a rocket lifting off every few days. And if you think that will be enough to meet demand, just wait a couple years.


This smart prosthetic ankle adjusts to rough terrain

Prosthetic limbs are getting better and more personalized, but useful as they are, they’re still a far cry from the real thing. This new prosthetic ankle is a little closer than others, though: it moves on its own, adapting to its user’s gait and the surface on which it lands.
Your ankle does a lot of work when you walk: lifting your toe out of the way so you don’t scuff it on the ground, controlling the tilt of your foot to minimize the shock when it lands or as you adjust your weight, all while conforming to bumps and other irregularities it encounters. Few prostheses attempt to replicate these motions, meaning all that work is done in a more basic way, like the bending of a spring or compression of padding.
But this prototype ankle from Michael Goldfarb, a mechanical engineering professor at Vanderbilt, goes much further than passive shock absorption. Inside the joint are a motor and actuator, controlled by a chip that senses and classifies motion and determines how each step should look.


“This device first and foremost adapts to what’s around it,” Goldfarb said in a video documenting the prosthesis.
“You can walk up slopes, down slopes, up stairs and down stairs, and the device figures out what you’re doing and functions the way it should,” he added in a news release from the university.
When it senses that the foot has lifted up for a step, it can lift the toe up to keep it clear, also exposing the heel so that when the limb comes down, it can roll into the next step. And by reading the pressure both from above (indicating how the person is using that foot) and below (indicating the slope and irregularities of the surface) it can make that step feel much more like a natural one.
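The article doesn’t publish the controller, but the sense-classify-actuate loop it describes can be sketched roughly like this. Every value here (the clearance angle, the slope limit, the load model) is a made-up placeholder for illustration, not a parameter from the Vanderbilt device.

```python
# Hedged sketch of the kind of sense-classify-actuate loop described above --
# an illustration, not Vanderbilt's actual controller. Thresholds and angles
# are placeholder assumptions.

TOE_CLEARANCE_DEG = 15.0    # assumed dorsiflexion during swing, to avoid scuffing the toe
MAX_SLOPE_ADAPT_DEG = 10.0  # assumed limit on how far the foot tilts to match terrain

def ankle_command(load_from_above_n, ground_slope_deg, foot_in_swing):
    """Decide the ankle angle for this control cycle.

    load_from_above_n: force through the pylon (how the person is loading the foot)
    ground_slope_deg:  estimated surface slope under the foot
    foot_in_swing:     True while the foot is lifted between steps
    """
    if foot_in_swing:
        # Lift the toe so it clears the ground and the heel lands first,
        # letting the limb roll into the next step.
        return TOE_CLEARANCE_DEG
    # Foot is on the ground: tilt to match the surface, within a safe range,
    # and flatten toward neutral as the user puts more weight on it.
    slope = max(-MAX_SLOPE_ADAPT_DEG, min(MAX_SLOPE_ADAPT_DEG, ground_slope_deg))
    weight_factor = min(load_from_above_n / 800.0, 1.0)  # 800 N ~ full body weight (assumed)
    return slope * (1.0 - 0.5 * weight_factor)

# Example: mid-stance on a 5-degree upslope with about half of body weight on the foot
print(ankle_command(load_from_above_n=400, ground_slope_deg=5.0, foot_in_swing=False))  # 3.75 degrees
```

A real controller would run this decision many times per second and blend the result smoothly through the motor and actuator inside the joint.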

One veteran of many prostheses, Mike Sasser, tested the device and had good things to say: “I’ve tried hydraulic ankles that had no sort of microprocessors, and they’ve been clunky, heavy and unforgiving for an active person. This isn’t that.”
Right now the device is still very lab-bound, and it runs on wired power — not exactly convenient if someone wants to go for a walk. But if the joint works as designed, as it certainly seems to, then powering it is a secondary issue. The plan is to commercialize the prosthesis in the next couple of years once all that is figured out. You can learn a bit more about Goldfarb’s research at the Center for Intelligent Mechatronics.
