Inspired by spiders and wasps, these tiny drones pull 40x their own weight

If we want drones to do our dirty work for us, they’re going to need to get pretty good at hauling stuff around. But due to the pesky yet unavoidable restraints of physics, it’s hard for them to muster the forces necessary to do so while airborne — so these drones brace themselves against the ground to get the requisite torque.
The drones, created by engineers at Stanford and Switzerland’s EPFL, were inspired by wasps and spiders that need to move prey from place to place but can’t actually lift it, so they drag it instead. Grippy feet and strong threads or jaws let them pull objects many times their weight along the ground, just as you might slide a dresser along rather than pick it up and put it down again. So I guess it could have also just been inspired by that.


Whatever the inspiration, these “FlyCroTugs” (a combination of flying, micro and tug presumably) act like ordinary tiny drones while in the air, able to move freely about and land wherever they need to. But they’re equipped with three critical components: an anchor to attach to objects, a winch to pull on that anchor and sticky feet to provide sure grip while doing so.
“By combining the aerodynamic forces of our vehicle and the interactive forces generated by the attachment mechanisms, we were able to come up with something that is very mobile, very strong and very small,” said Stanford grad student Matthew Estrada, lead author of the paper published in Science Robotics.

The idea is that one or several of these ~100-gram drones could attach their anchors to something they need to move, be it a lever or a piece of trash. Then they take off and land nearby, spooling out thread as they do so. Once they’re back on terra firma they activate their winches, pulling the object along the ground — or up over obstacles that would have been impossible to navigate with tiny wheels or feet.
Using this technique — assuming they can get a solid grip on whatever surface they land on — the drones are capable of moving objects 40 times their weight; for a 100-gram drone like the one shown, that’s about 4 kilograms, or nearly 9 pounds. Not quickly, but that may not always be a necessity. What if a handful of these things flew around the house when you were gone, picking up bits of trash or moving mail into piles? They would have hours to do it.
As you can see in the video below, they can even team up to do things like open doors.

“People tend to think of drones as machines that fly and observe the world,” said co-author of the paper, EPFL’s Dario Floreano, in a news release. “But flying insects do many other things, such as walking, climbing, grasping and building. Social insects can even work together and combine their strength. Through our research, we show that small drones are capable of anchoring themselves to surfaces around them and cooperating with fellow drones. This enables them to perform tasks typically assigned to humanoid robots or much larger machines.”
Unless you’re prepared to wait for humanoid robots to take on tasks like this (and it may be a decade or two), you may have to settle for drone swarms in the meantime.

Source: Gadgets – techcrunch

The future of photography is code

What’s in a camera? A lens, a shutter, a light-sensitive surface and, increasingly, a set of highly sophisticated algorithms. While the physical components are still improving bit by bit, Google, Samsung and Apple are increasingly investing in (and showcasing) improvements wrought entirely from code. Computational photography is the only real battleground now.
The reason for this shift is pretty simple: Cameras can’t get too much better than they are right now, or at least not without some rather extreme shifts in how they work. Here’s how smartphone makers hit the wall on photography, and how they were forced to jump over it.
Not enough buckets
An image sensor one might find in a digital camera
The sensors in our smartphone cameras are truly amazing things. The work that’s been done by the likes of Sony, OmniVision, Samsung and others to design and fabricate tiny yet sensitive and versatile chips is really pretty mind-blowing. For a photographer who’s watched the evolution of digital photography from the early days, the level of quality these microscopic sensors deliver is nothing short of astonishing.
But there’s no Moore’s Law for those sensors. Or rather, just as Moore’s Law is now running into quantum limits at sub-10-nanometer levels, camera sensors hit physical limits much earlier. Think about light hitting the sensor as rain falling on a bunch of buckets; you can place bigger buckets, but there are fewer of them; you can place smaller ones, but each catches less; you can make them square or stagger them or do all kinds of other tricks, but ultimately there are only so many raindrops, and no amount of bucket-rearranging can change that.
Sensors are getting better, yes, but not only is this pace too slow to keep consumers buying new phones year after year (imagine trying to sell a camera that’s 3 percent better), but phone manufacturers often use the same or similar camera stacks, so the improvements (like the recent switch to backside illumination) are shared amongst them. So no one is getting ahead on sensors alone.


Perhaps they could improve the lens? Not really. Lenses have arrived at a level of sophistication and perfection that is hard to improve on, especially at small scale. To say space is limited inside a smartphone’s camera stack is a major understatement — there’s hardly a square micron to spare. You might be able to improve them slightly as far as how much light passes through and how little distortion there is, but these are old problems that have been mostly optimized.
The only way to gather more light would be to increase the size of the lens, either by having it A: project outwards from the body; B: displace critical components within the body; or C: increase the thickness of the phone. Which of those options does Apple seem likely to find acceptable?
In retrospect it was inevitable that Apple (and Samsung, and Huawei, and others) would have to choose D: none of the above. If you can’t get more light, you just have to do more with the light you’ve got.
Isn’t all photography computational?
The broadest definition of computational photography includes just about any digital imaging at all. Unlike film, even the most basic digital camera requires computation to turn the light hitting the sensor into a usable image. And camera makers differ widely in the way they do this, producing different JPEG processing methods, RAW formats and color science.
For a long time there wasn’t much of interest on top of this basic layer, partly from a lack of processing power. Sure, there have been filters, and quick in-camera tweaks to improve contrast and color. But ultimately these just amount to automated dial-twiddling.

The first real computational photography features were arguably object identification and tracking for the purposes of autofocus. Face and eye tracking made it easier to capture people in complex lighting or poses, and object tracking made sports and action photography easier as the system adjusted its AF point to a target moving across the frame.
These were early examples of deriving metadata from the image and using it proactively, to improve that image or feed it forward to the next.
In DSLRs, autofocus accuracy and flexibility are marquee features, so this early use case made sense; but outside a few gimmicks, these “serious” cameras generally deployed computation in a fairly vanilla way. Faster image sensors meant faster sensor offloading and burst speeds, some extra cycles dedicated to color and detail preservation and so on. DSLRs weren’t being used for live video or augmented reality. And until fairly recently, the same was true of smartphone cameras, which were more like point-and-shoots than the all-purpose media tools we know them as today.
The limits of traditional imaging

Despite experimentation here and there and the occasional outlier, smartphone cameras are pretty much the same. They have to fit within a few millimeters of depth, which limits their optics to a few configurations. The size of the sensor is likewise limited — a DSLR might use an APS-C sensor 23 by 15 millimeters across, making an area of 345 mm2; the sensor in the iPhone XS, probably the largest and most advanced on the market right now, is 7 by 5.8 mm or so, for a total of 40.6 mm2.
Roughly speaking, it’s collecting an order of magnitude less light than a “normal” camera, but is expected to reconstruct a scene with roughly the same fidelity, colors and such — around the same number of megapixels, too. On its face this is sort of an impossible problem.
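As a sanity check on that “order of magnitude” claim, here’s a quick back-of-the-envelope calculation in Python using the sensor dimensions quoted above (the figures come from the text; the iPhone XS sensor size is approximate).

    # Light-gathering area, using the dimensions quoted above
    aps_c_area = 23 * 15      # mm^2 for a typical APS-C DSLR sensor -> 345
    iphone_area = 7 * 5.8     # mm^2 for the iPhone XS sensor -> ~40.6

    ratio = aps_c_area / iphone_area
    print(f"APS-C: {aps_c_area} mm^2, iPhone XS: {iphone_area:.1f} mm^2")
    print(f"The bigger sensor collects roughly {ratio:.1f}x more light")  # ~8.5x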
Improvements in the traditional sense help out — optical and electronic stabilization, for instance, make it possible to expose for longer without blurring, collecting more light. But these devices are still being asked to spin straw into gold.
Luckily, as I mentioned, everyone is pretty much in the same boat. Because of the fundamental limitations in play, there’s no way Apple or Samsung can reinvent the camera or come up with some crazy lens structure that puts them leagues ahead of the competition. They’ve all been given the same basic foundation.
All competition therefore comprises what these companies build on top of that foundation.
Image as stream

The key insight in computational photography is that an image coming from a digital camera’s sensor isn’t a snapshot, the way it is generally thought of. In traditional cameras the shutter opens and closes, exposing the light-sensitive medium for a fraction of a second. That’s not what digital cameras do, or at least not what they can do.
A camera’s sensor is constantly bombarded with light; rain is constantly falling on the field of buckets, to return to our metaphor, but when you’re not taking a picture, these buckets are bottomless and no one is checking their contents. But the rain is falling nevertheless.
To capture an image the camera system picks a point at which to start counting the raindrops, measuring the light that hits the sensor. Then it picks a point to stop. For the purposes of traditional photography, this enables nearly arbitrarily short shutter speeds, which isn’t much use to tiny sensors.
Why not just always be recording? Theoretically you could, but it would drain the battery and produce a lot of heat. Fortunately, in the last few years image processing chips have gotten efficient enough that they can, when the camera app is open, keep a certain duration of that stream — limited resolution captures of the last 60 frames, for instance. Sure, it costs a little battery, but it’s worth it.
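Conceptually, that rolling stream is just a fixed-length buffer of recent frames. Here is a minimal Python sketch of the idea; the 60-frame figure comes from the example above, while the function names and frame source are placeholders rather than any vendor’s actual camera API.

    from collections import deque

    STREAM_LENGTH = 60  # keep roughly the last 60 limited-resolution frames

    frame_buffer = deque(maxlen=STREAM_LENGTH)  # oldest frames fall off automatically

    def on_new_frame(frame):
        """Called for every frame the sensor produces while the camera app is open."""
        frame_buffer.append(frame)

    def on_shutter_press():
        """By the time the user taps the shutter, the recent past is already captured."""
        return list(frame_buffer)  # hand the last ~60 frames to the processing pipeline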
Access to the stream allows the camera to do all kinds of things. It adds context.
Context can mean a lot of things. It can be photographic elements like the lighting and distance to subject. But it can also be motion, objects, intention.
A simple example of context is what is commonly referred to as HDR, or high dynamic range imagery. This technique uses multiple images taken in a row with different exposures to more accurately capture areas of the image that might have been underexposed or overexposed in a single exposure. The context in this case is understanding which areas those are and how to intelligently combine the images together.
This can be accomplished with exposure bracketing, a very old photographic technique, but it can be accomplished instantly and without warning if the image stream is being manipulated to produce multiple exposure ranges all the time. That’s exactly what Google and Apple now do.
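To make the merging step concrete, here is a toy exposure-fusion sketch in Python: each bracketed frame is weighted per pixel by how well exposed it is (values near mid-gray count more than clipped ones), then the frames are averaged. It is a deliberately simplified stand-in for the far more sophisticated pipelines Google and Apple actually ship.

    import numpy as np

    def fuse_exposures(frames):
        """Naive HDR-style fusion. `frames` is a list of same-sized float arrays
        in [0, 1], taken at different exposures. Each pixel becomes a weighted
        average of the bracketed shots, favoring well-exposed values over clipped ones."""
        stack = np.stack(frames)                                   # (n, H, W) or (n, H, W, 3)
        weights = np.exp(-((stack - 0.5) ** 2) / (2 * 0.2 ** 2))   # peak weight at mid-gray
        weights /= weights.sum(axis=0, keepdims=True)
        return (weights * stack).sum(axis=0)

    # Synthetic example: a dark, a normal and a bright exposure of the same scene.
    scene = np.random.rand(4, 4)
    dark, mid, bright = (np.clip(scene * s, 0, 1) for s in (0.3, 1.0, 1.8))
    hdr = fuse_exposures([dark, mid, bright])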

Something more complex is of course the “portrait mode” and artificial background blur or bokeh that is becoming more and more common. Context here is not simply the distance of a face, but an understanding of what parts of the image constitute a particular physical object, and the exact contours of that object. This can be derived from motion in the stream, from stereo separation in multiple cameras, and from machine learning models that have been trained to identify and delineate human shapes.
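The compositing half of portrait mode (as opposed to the depth estimation, which is the genuinely hard part) can be sketched in a few lines: blur everything, then paste the sharp subject back using a soft mask, however that mask was derived. Note that this is the Gaussian-blur shortcut discussed further down, not a physically faithful bokeh.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def fake_portrait(image, subject_mask, blur_sigma=8.0):
        """Toy 'portrait mode'. `image` is (H, W, 3) float; `subject_mask` is (H, W)
        in [0, 1], with 1 marking the subject (from stereo disparity, motion or a
        segmentation model). The background is blurred and the sharp subject is
        composited back; a soft mask avoids a hard halo at the edges."""
        blurred = np.stack(
            [gaussian_filter(image[..., c], blur_sigma) for c in range(3)], axis=-1)
        mask = subject_mask[..., None]          # broadcast over the color channels
        return mask * image + (1.0 - mask) * blurred

    # Usage with synthetic data: a rectangular "subject" in the middle of the frame.
    img = np.random.rand(120, 160, 3)
    mask = np.zeros((120, 160))
    mask[30:90, 50:110] = 1.0
    out = fake_portrait(img, mask)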
These techniques are only possible, first, because the requisite imagery has been captured from the stream in the first place (an advance in image sensor and RAM speed), and second, because companies developed highly efficient algorithms to perform these calculations, trained on enormous data sets and immense amounts of computation time.
What’s important about these techniques, however, is not simply that they can be done, but that one company may do them better than the other. And this quality is entirely a function of the software engineering work and artistic oversight that goes into them.


DxOMark did a comparison of some early artificial bokeh systems; the results, however, were somewhat unsatisfying. It was less a question of which looked better, and more of whether they failed or succeeded in applying the effect. Computational photography is in such early days that it is enough for the feature to simply work to impress people. As with a dog walking on its hind legs, we are amazed that it occurs at all.
But Apple has pulled ahead with what some would say is an almost absurdly over-engineered solution to the bokeh problem. It didn’t just learn how to replicate the effect — it used the computing power it has at its disposal to create virtual physical models of the optical phenomenon that produces it. It’s like the difference between animating a bouncing ball and simulating realistic gravity and elastic material physics.
Why go to such lengths? Because Apple knows what is becoming clear to others: that it is absurd to worry about the limits of computational capability at all. There are limits to how well an optical phenomenon can be replicated if you are taking shortcuts like Gaussian blurring. There are no limits to how well it can be replicated if you simulate it at the level of the photon.
Similarly the idea of combining five, 10, or 100 images into a single HDR image seems absurd, but the truth is that in photography, more information is almost always better. If the cost of these computational acrobatics is negligible and the results measurable, why shouldn’t our devices be performing these calculations? In a few years they too will seem ordinary.
If the result is a better product, the computational power and engineering ability have been deployed with success; just as Leica or Canon might spend millions to eke fractional performance improvements out of a stable optical system like a $2,000 zoom lens, Apple and others are spending money where they can create value: not in glass, but in silicon.
Double vision
One trend that may appear to conflict with the computational photography narrative I’ve described is the advent of systems comprising multiple cameras.
This technique doesn’t add more light to the sensor — that would be prohibitively complex and expensive optically, and probably wouldn’t work anyway. But if you can free up a little space lengthwise (rather than depthwise, which we found impractical) you can put a whole separate camera right next to the first, one that captures photos extremely similar to those taken by its neighbor.
A mock-up of what a line of color iPhones could look like
Now, if all you want to do is re-enact Wayne’s World at an imperceptible scale (camera one, camera two… camera one, camera two…) that’s all you need. But no one actually wants to take two images simultaneously, a fraction of an inch apart.
These two cameras operate either independently (as wide-angle and zoom) or one is used to augment the other, forming a single system with multiple inputs.
The thing is that taking the data from one camera and using it to enhance the data from another is — you guessed it — extremely computationally intensive. It’s like the HDR problem of multiple exposures, except far more complex as the images aren’t taken with the same lens and sensor. It can be optimized, but that doesn’t make it easy.
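For a rough sense of what using one camera’s data to enhance the other’s involves, here is a hedged OpenCV sketch that registers the second camera’s frame onto the first with feature matching and a homography. The cv2 calls are standard OpenCV, but the pipeline is illustrative only; among other simplifications, a single homography ignores parallax on nearby objects, which is a big part of why the real problem is so much harder.

    import cv2
    import numpy as np

    def align_secondary_to_primary(primary_gray, secondary_gray):
        """Register the secondary camera's grayscale frame onto the primary's view."""
        orb = cv2.ORB_create(2000)
        kp1, des1 = orb.detectAndCompute(primary_gray, None)
        kp2, des2 = orb.detectAndCompute(secondary_gray, None)

        # Match descriptors and keep the strongest correspondences.
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des2, des1), key=lambda m: m.distance)[:200]

        src = np.float32([kp2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

        # Estimate a homography with RANSAC and warp the secondary frame onto the primary.
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        h, w = primary_gray.shape
        return cv2.warpPerspective(secondary_gray, H, (w, h))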
So although adding a second camera is indeed a way to improve the imaging system by physical means, the possibility only exists because of the state of computational photography. And it is the quality of that computational imagery that results in a better photograph — or doesn’t. The Light camera with its 16 sensors and lenses is an example of an ambitious effort that simply didn’t produce better images, though it was using established computational photography techniques to harvest and winnow an even larger collection of images.
Light and code
The future of photography is computational, not optical. This is a massive shift in paradigm and one that every company that makes or uses cameras is currently grappling with. There will be repercussions in traditional cameras like SLRs (rapidly giving way to mirrorless systems), in phones, in embedded devices and everywhere that light is captured and turned into images.
Sometimes this means that the cameras we hear about will be much the same as last year’s, as far as megapixel counts, ISO ranges, f-numbers and so on. That’s okay. With some exceptions these have gotten as good as we can reasonably expect them to be: Glass isn’t getting any clearer, and our vision isn’t getting any more acute. The way light moves through our devices and eyeballs isn’t likely to change much.
What those devices do with that light, however, is changing at an incredible rate. This will produce features that sound ridiculous, or pseudoscience babble on stage, or drained batteries. That’s okay, too. Just as we have experimented with other parts of the camera for the last century and brought them to varying levels of perfection, we have moved onto a new, non-physical “part” which nonetheless has a very important effect on the quality and even possibility of the images we take.

Source: Gadgets – techcrunch

Accion Systems takes on $3M in Boeing-led round to advance its tiny satellite thrusters

Accion Systems, the startup aiming to reinvent satellite propulsion with an innovative and tiny new thruster, has attracted significant investment from Boeing’s HorizonX Ventures. The $3 million round should give the company a bit of breathing room while it continues to prove and improve its technology.
“Investing in startups with next-generation concepts accelerates satellite innovation, unlocking new possibilities and economics in Earth orbit and deep space,” said HorizonX Ventures managing director Brian Schettler in a press release.
Accion, whose founder and CEO Natalya Bailey graced the stage of Disrupt just a few weeks ago, makes what’s called a “tiled ionic liquid electrospray” propulsion system, or TILE. This system is highly efficient and can be made the size of a postage stamp or much larger depending on the requirements of the satellite.
Example of a TILE attached to a satellite chassis.
The company has tested its tech in terrestrial facilities and in space, but it hasn’t been used for any missions just yet — though that may change soon. A pair of student-engineered cubesats equipped with TILE thrusters are scheduled to take off on Rocket Lab’s first big commercial payload launch, “It’s Business Time.” It’s been delayed a few times but early November is the next launch window, so everyone cross your fingers.
Another launch scheduled for November is the IRVINE 02 cubesat, which will sport TILEs and go up aboard a Falcon 9 loaded with supplies for the International Space Station.
The Boeing investment (Gettylab also participated in the round) doesn’t include any guarantees like equipping Boeing-built satellites with the thrusters. But the company is certainly already dedicated to this type of tech and the arrangement is characterized as a partnership — so it’s definitely a possibility.
Natalya Bailey and Rob Coneybeer (Shasta Ventures) at Disrupt Berlin 2017.
A Boeing representative told me that this is aimed at helping Accion scale, and that the latter will have access to the former’s testing facilities and expertise. “We believe there will be many applications for Accion’s propulsion system, and will be monitoring and assessing the tech as it continues to mature,” they wrote in an email.
I asked Accion what the new funding will be directed towards, but a representative only indicated that it would be used for the usual things: research, operations, staff expenses, and so on. Not some big skunk works project, then. The company’s last big round was in 2016, when it raised $7.5 million.

Source: Gadgets – techcrunch

‘Jackrabbot 2’ takes to the sidewalks to learn how humans navigate politely

Autonomous vehicles and robots have to know how to get from A to B without hitting obstacles or pedestrians — but how can they do so politely and without disturbing nearby humans? That’s what Stanford’s Jackrabbot project aims to learn, and now a redesigned robot will be cruising campus learning the subtleties of humans negotiating one another’s personal space.
“There are many behaviors that we humans subconsciously follow – when I’m walking through crowds, I maintain personal distance or, if I’m talking with you, someone wouldn’t go between us and interrupt,” said grad student Ashwini Pokle in a Stanford News release. “We’re working on these deep learning algorithms so that the robot can adapt these behaviors and be more polite to people.”
Of course there are practical applications pertaining to last mile problems and robotic delivery as well. What do you do if someone stops in front of you? What if there’s a group running up behind? Experience is the best teacher, as usual.
The first robot was deployed in 2016 and has been hard at work building a model of how humans (well, mostly undergrads) walk around safely, avoid one another while taking efficient paths and signal what they’re doing the whole time. But technology has advanced so quickly that a new iteration was called for.
The JackRabbot project team with JackRabbot 2 (from left to right): Patrick Goebel, Noriaki Hirose, Tin Tin Wisniewski, Amir Sadeghian, Alan Federman, Silvio Savarese, Roberto Martín-Martín, Pin Pin Tea-mangkornpan and Ashwini Pokle
The new robot has a vastly improved sensor suite compared to its predecessor: two Velodyne lidar units giving 360 degree coverage, plus a set of stereo cameras making up its neck that give it another depth-sensing 360 degree view. The cameras and sensors on its head can also be pointed wherever needed, of course, just like ours. All this imagery is collated by a pair of new GPUs in its base/body.
Amir Sadeghian, one of the researchers, said this makes Jackrabbot 2 “one of the most powerful robots of its size that has ever been built.”

This will allow the robot to sense human motion with a much greater degree of precision than before, and also operate more safely. It will also give the researchers a chance to see how the movement models created by the previous robot integrate with this new imagery.
The other major addition is a totally normal-looking arm that Jackrabbot 2 can use to gesture to others. After all, we do it, right? When it’s unclear who should enter a door first or what side of a path they should take, a wave of the hand is all it takes to clear things up. Usually. Hopefully this kinked little gripper accomplishes the same thing.
Jackrabbot 2 can zoom around for several hours at a time, Sadeghian said. “At this stage of the project for safety we have a human with a safety switch accompanying the robot, but the robot is able to navigate in a fully autonomous way.”
Having working knowledge of how people use the space around them and how to predict their movements will be useful to startups like Kiwi, Starship, and Marble. The first time a delivery robot smacks into someone’s legs is the last time they consider ordering something via one.

Source: Gadgets – techcrunch

VR optics could help old folks keep the world in focus

The complex optics involved with putting a screen an inch away from the eye in VR headsets could make for smartglasses that correct for vision problems. These prototype “autofocals” from Stanford researchers use depth sensing and gaze tracking to bring the world into focus when someone lacks the ability to do it on their own.
I talked with lead researcher Nitish Padmanaban at SIGGRAPH in Vancouver, where he and the others on his team were showing off the latest version of the system. It’s meant, he explained, to be a better solution to the problem of presbyopia, which is basically when your eyes refuse to focus on close-up objects. It happens to millions of people as they age, even people with otherwise excellent vision.
There are, of course, bifocals and progressive lenses that bend light in such a way as to bring such objects into focus — purely optical solutions, and cheap as well, but inflexible, and they only provide a small “viewport” through which to view the world. And there are adjustable-lens glasses as well, but they must be adjusted slowly and manually with a dial on the side. What if you could make the whole lens change shape automatically, depending on the user’s need, in real time?
That’s what Padmanaban and colleagues Robert Konrad and Gordon Wetzstein are working on, and although the current prototype is obviously far too bulky and limited for actual deployment, the concept seems totally sound.
Padmanaban previously worked in VR, and mentioned what’s called the convergence-accommodation problem. Basically, the way that we see changes in real life when we move and refocus our eyes from far to near doesn’t happen properly (if at all) in VR, and that can produce pain and nausea. Having lenses that automatically adjust based on where you’re looking would be useful there — and indeed some VR developers were showing off just that only 10 feet away. But it could also apply to people who are unable to focus on nearby objects in the real world, Padmanaban thought.
This is an old prototype, but you get the idea.
It works like this. A depth sensor on the glasses collects a basic view of the scene in front of the person: a newspaper is 14 inches away, a table three feet away, the rest of the room considerably more. Then an eye-tracking system checks where the user is currently looking and cross-references that with the depth map.
Having been equipped with the specifics of the user’s vision problem, for instance that they have trouble focusing on objects closer than 20 inches away, the apparatus can then make an intelligent decision as to whether and how to adjust the lenses of the glasses.
In the case above, if the user was looking at the table or the rest of the room, the glasses will assume whatever normal correction the person requires to see — perhaps none. But if they change their gaze to focus on the paper, the glasses immediately adjust the lenses (perhaps independently per eye) to bring that object into focus in a way that doesn’t strain the person’s eyes.
The whole process of checking the gaze, the depth of the selected object and the adjustment of the lenses takes a total of about 150 milliseconds. That’s long enough that the user might notice it happening, but the whole process of redirecting and refocusing one’s gaze takes perhaps three or four times that long — so the changes in the device will be complete by the time the user’s eyes would normally be at rest again.
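Put together, the decision logic amounts to a small control loop: read the gaze point, look up the depth under it, convert that distance into the extra lens power (in diopters) the wearer needs, and drive the tunable lenses. The sketch below assumes a wearer who cannot focus closer than about 20 inches (0.5 m), per the example above; the function names and limits are illustrative, not the Stanford prototype’s actual code.

    NEAR_LIMIT_M = 0.5      # assumed: the wearer can't focus closer than ~20 inches
    MAX_ADD_DIOPTERS = 3.0  # assumed upper bound for the tunable lens

    def required_add_power(distance_m):
        """Extra lens power (diopters) needed to focus at distance_m, given that
        the eye can handle anything beyond NEAR_LIMIT_M on its own."""
        if distance_m >= NEAR_LIMIT_M:
            return 0.0
        return min(1.0 / distance_m - 1.0 / NEAR_LIMIT_M, MAX_ADD_DIOPTERS)

    def autofocal_step(gaze_xy, depth_map, set_lens_power):
        """One ~150 ms cycle: depth under the gaze point -> lens correction."""
        x, y = gaze_xy
        distance_m = depth_map[y][x]            # depth sensor reading at the gaze point
        set_lens_power(required_add_power(distance_m))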
“Even with an early prototype, the Autofocals are comparable to and sometimes better than traditional correction,” reads a short summary of the research published for SIGGRAPH. “Furthermore, the ‘natural’ operation of the Autofocals makes them usable on first wear.”
The team is currently conducting tests to measure more quantitatively the improvements derived from this system, and test for any possible ill effects, glitches or other complaints. They’re a long way from commercialization, but Padmanaban suggested that some manufacturers are already looking into this type of method and despite its early stage, it’s highly promising. We can expect to hear more from them when the full paper is published.

Source: Gadgets – techcrunch

NASA’s Parker Solar Probe launches to ‘touch the sun’

Update: Launch successful!
NASA’s ambitious mission to go closer to the Sun than ever before is set to launch in the small hours between Friday and Saturday — at 3:53 AM Eastern from Kennedy Space Center in Florida, to be precise. The Parker Solar Probe, after a handful of gravity assists and preliminary orbits, will enter a stable orbit around the enormous nuclear fireball that gives us all life and sample its radiation from less than 4 million miles away. Believe me, you don’t want to get much closer than that.
If you’re up late tonight (technically tomorrow morning), you can watch the launch live on NASA’s stream.
This is the first mission named after a living researcher, in this case Eugene Parker, who in the ’50s made a number of proposals and theories about the way that stars give off energy. He’s the guy who gave us solar wind, and his research was hugely influential in the study of the sun and other stars — but it’s only now that some of his hypotheses can be tested directly.
(Parker himself visited the craft during its construction, and will be at the launch. No doubt he is immensely proud and excited about this whole situation.)

“Directly” means going as close to the sun as technology allows — which leads us to the PSP’s first major innovation: its heat shield, or thermal protection system.

There’s one good thing to be said for the heat near the sun: it’s a dry heat. Because there’s no water vapor or other gas in space to heat up, find some shade and you’ll be quite comfortable. So the probe is essentially carrying the most heavy-duty parasol ever created.
It’s a sort of carbon sandwich, with superheated carbon composite on the outside and a carbon foam core. All together it’s less than a foot thick, but it reduces the temperature the probe’s instruments are subjected to from 2,500 degrees Fahrenheit to 85 — actually cooler than it is in much of the U.S. right now.
Go on – it’s quite cool.
The car-sized Parker will orbit the sun and constantly rotate itself so the heat shield is facing inward and blocking the brunt of the solar radiation. The instruments mostly sit behind it in a big insulated bundle.
And such instruments! There are three major experiments or instrument sets on the probe.
WISPR (Wide-Field Imager for Parker Solar Probe) is a pair of wide-field telescopes that will watch and image the structure of the corona and solar wind. This is the kind of observation we’ve made before — but never from up close. We generally are seeing these phenomena from the neighborhood of the Earth, nearly 100 million miles away. You can imagine that cutting out 90 million miles of cosmic dust, interfering radiation and other nuisances will produce an amazingly clear picture.
SWEAP (Solar Wind Electrons Alphas and Protons investigation) looks out to the side of the craft to watch the flows of electrons as they are affected by solar wind and other factors. And on the front is the Solar Probe Cup (I suspect this is a reference to the Ray Bradbury story, “Golden Apples of the Sun”), which is exposed to the full strength of the sun’s radiation; a tiny opening allows charged particles in, and by tracking how they pass through a series of charged windows, they can sort them by type and energy.
FIELDS is another that gets the full heat of the sun. Its antennas are the ones sticking out from the sides — they need to in order to directly sample the electric field surrounding the craft. A set of “fluxgate magnetometers,” clearly a made-up name, measure the magnetic field at an incredibly high rate: two million samples per second.
They’re all powered by solar panels, which seems obvious, but actually it’s a difficult proposition to keep the panels from overloading that close to the sun. They hide behind the shield and just peek out at an oblique angle, so only a fraction of the radiation hits them.
Even then, they’ll get so hot that the team needed to implement the first-ever active water cooling system on a spacecraft. Water is pumped through the cells and back behind the shield, where it is cooled by, well, space.
The probe’s mission profile is a complicated one. After escaping the clutches of the Earth, it will swing by Venus, not to get a gravity boost, but “almost like doing a little handbrake turn,” as one official described it. It slows it down and sends it closer to the sun — and it’ll do that seven more times, each time bringing it closer and closer to the sun’s surface, ultimately arriving in a stable orbit 3.83 million miles above the surface — that’s 95 percent of the way from the Earth to the sun.
On the way it will hit a top speed of 430,000 miles per hour, which will make it the fastest spacecraft ever launched.

Parker will make 24 total passes through the corona, and during these times communication with Earth may be interrupted or impractical. If a solar cell is overheating, do you want to wait 20 minutes for a decision from NASA on whether to pull it back? No. This close to the sun even a slight miscalculation results in the reduction of the probe to a cinder, so the team has imbued it with more than the usual autonomy.
It’s covered in sensors in addition to its instruments, and an onboard AI will be empowered to make decisions to rectify anomalies. That sounds worryingly like a HAL 9000 situation, but there are no humans on board to kill, so it’s probably okay.
The mission is scheduled to last seven years, after which time the fuel used to correct the craft’s orbit and orientation is expected to run out. At that point it will continue as long as it can before drift causes it to break apart and, one rather hopes, become part of the sun’s corona itself.
The Parker Solar Probe is scheduled for launch early Saturday morning, and we’ll update this post when it takes off successfully or, as is possible, is delayed until a later date in the launch window.

Source: Gadgets – techcrunch

This 3D-printed camp stove is extra-efficient and wind-resistant

I love camping, but there’s always an awkward period, after you’ve left the tent but before you’ve created coffee, during which I hate camping. It’s hard to watch the pot not boil and not want to just go back to bed, but since the warm air escaped when I opened the tent it’s pointless! Anyway, the Swiss figured out a great way to boil water faster, and I want one of these sweet stoves now.
The PeakBoil stove comes from design students at ETH Zurich, who have clearly faced the same problems as myself. But since they actually camp in inclement weather, they also have to deal with wind blowing out the feeble flame of an ordinary gas burner.
Their attempt to improve on the design takes the controversial step of essentially installing a stovepipe inside the vessel and heating it from the inside out rather than from the bottom up. This has been used in lots of other situations to heat water but it’s the first time I’ve seen it in a camp stove.
By carefully configuring the gas nozzles and adding ripples to the wall of the heat pipe, PeakBoil “increases the contact area between the flame and the jug,” explained doctoral student and project leader Julian Ferchow in an ETH Zurich news release.
“That, plus the fact that the wall is very thin, makes heat transfer to the contents of the jug ideal,” added his colleague Patrick Beutler.

Keeping the flame isolated inside the chimney, behind baffles, minimizes wind interference and saves you from burning extra gas just to keep it alive.
The design was created using a selective laser melting or sintering process, in which metal powder is melted in a pattern much like a 3D printer lays down heated plastic. It’s really just another form of additive manufacturing, and it gave the students “a huge amount of design freedom…with metal casting, for instance, we could never achieve channels that are as thin as the ones inside our gas burner,” Ferchow said.
Of course, the design means it’s pretty much only usable for boiling water (you wouldn’t want to balance a pan on top of it), but that’s such a common and specific use case that many campers already have a stove dedicated to the purpose.
The team is looking to further improve the design and also find an industry partner with which to take it to market. MSR, GSI, REI… I’m looking at you. Together we can make my mornings bearable.

Source: Gadgets – techcrunch

NASA’s Open Source Rover lets you build your own planetary exploration platform

Got some spare time this weekend? Why not build yourself a working rover from plans provided by NASA? The spaceniks at the Jet Propulsion Laboratory have all the plans, code, and materials for you to peruse and use — just make sure you’ve got $2,500 and a bit of engineering know-how. This thing isn’t made out of Lincoln Logs.
The story is this: after Curiosity landed on Mars, JPL wanted to create something a little smaller and less complex that it could use for educational purposes. ROV-E, as they called this new rover, traveled with JPL staff throughout the country.
Unsurprisingly, one of the questions most often asked was whether a class or group could build one of their own. The answer, unfortunately, was no: though far less expensive and complex than a real Mars rover, ROV-E was still too expensive and complex to be a class project. So JPL engineers decided to build one that wasn’t.
The result is the JPL Open Source Rover, a set of plans that mimic the key components of Curiosity but are simpler and use off-the-shelf components.
“I would love to have had the opportunity to build this rover in high school, and I hope that through this project we provide that opportunity to others,” said JPL’s Tom Soderstrom in a post announcing the OSR. “We wanted to give back to the community and lower the barrier of entry by giving hands on experience to the next generation of scientists, engineers, and programmers.”
The OSR uses Curiosity-like “Rocker-Bogie” suspension, corner steering and a pivoting differential, allowing movement over rough terrain, and the brain is a Raspberry Pi. You can find all the parts in the usual supply catalogs and hardware stores, but you’ll also need a set of basic tools: a bandsaw to cut metal, probably a drill press, a soldering iron, snips and wrenches, and so on.
“In our experience, this project takes no less than 200 person-hours to build, and depending on the familiarity and skill level of those involved could be significantly more,” the project’s creators write on the GitHub page.
So basically, unless you’re literally rocket scientists, expect double that. JPL notes, though, that it did work with schools to adjust the building process and instructions.
There’s flexibility built into the plans, too. So you can load custom apps, connect payloads and sensors to the brain, and modify the mechanics however you’d like. It’s open source, after all. Make it your own.
“We released this rover as a base model. We hope to see the community contribute improvements and additions, and we’re really excited to see what the community will add to it,” said project manager Mik Cox. “I would love to have had the opportunity to build this rover in high school, and I hope that through this project we provide that opportunity to others.”

Source: Gadgets – techcrunch

OpenAI’s robotic hand doesn’t need humans to teach it human behaviors

Gripping something with your hand is one of the first things you learn to do as an infant, but it’s far from a simple task, and it only gets more complex and variable as you grow up. This complexity makes gripping difficult for machines to teach themselves, but researchers at the Elon Musk- and Sam Altman-backed OpenAI have created a system that not only holds and manipulates objects much like a human does, but developed these behaviors all on its own.
Many robots and robotic hands are already proficient at certain grips or movements — a robot in a factory can wield a bolt gun even more dexterously than a person. But the software that lets that robot do that task so well is likely to be hand-written and extremely specific to the application. You couldn’t, for example, give it a pencil and ask it to write. Even something on the same production line, like welding, would require a whole new system.


Yet for a human, picking up an apple isn’t so different from picking up a cup. There are differences, but our brains automatically fill in the gaps and we can improvise a new grip, hold an unfamiliar object securely and so on. This is one area where robots lag severely behind their human models. Furthermore, you can’t just train a bot to do what a human does — you’d have to provide millions of examples to adequately show what a human would do with thousands of given objects.
The solution, OpenAI’s researchers felt, was not to use human data at all. Instead, they let the computer try and fail over and over in a simulation, slowly learning how to move its fingers so that the object in its grasp moves as desired.
The system, which they call Dactyl, was provided only with the positions of its fingers and three camera views of the object in-hand — but remember, when it was being trained, all this data was simulated, taking place in a virtual environment. There, the computer doesn’t have to work in real time — it can try a thousand different ways of gripping an object in a few seconds, analyzing the results and feeding that data forward into the next try. (The hand itself is a Shadow Dexterous Hand, which is also more complex than most robotic hands.)
In addition to different objects and poses the system needed to learn, there were other randomized parameters, like the amount of friction the fingertips had, the colors and lighting of the scene and more. You can’t simulate every aspect of reality (yet), but you can make sure that your system doesn’t only work in a blue room, on cubes with special markings on them.
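This randomization strategy is commonly called domain randomization, and the gist is easy to sketch: every simulated episode draws fresh physics and appearance parameters before the policy trains on it, so nothing can be memorized about one particular room. The snippet below is purely illustrative; the simulator and policy interfaces, and the parameter ranges, are placeholders rather than OpenAI’s actual training code.

    import random

    def randomize_world():
        """Draw a fresh set of simulation parameters for one episode (illustrative ranges)."""
        return {
            "fingertip_friction": random.uniform(0.7, 1.3),  # relative to nominal
            "object_mass":        random.uniform(0.8, 1.2),
            "light_intensity":    random.uniform(0.5, 1.5),
            "camera_jitter_deg":  random.uniform(-2.0, 2.0),
        }

    def train(simulator, policy, episodes):
        """Domain-randomized training loop over a placeholder simulator/policy API."""
        for _ in range(episodes):
            params = randomize_world()
            trajectory = simulator.run_episode(policy, params)  # placeholder call
            policy.update(trajectory)                           # e.g. an RL update step
        return policy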
They threw a lot of power at the problem: 6144 CPUs and 8 GPUs, “collecting about one hundred years of experience in 50 hours.” And then they put the system to work in the real world for the first time — and it demonstrated some surprisingly human-like behaviors.
The things we do with our hands without even noticing, like turning an apple around to check for bruises or passing a mug of coffee to a friend, use lots of tiny tricks to stabilize or move the object. Dactyl recreated several of them, for example holding the object with a thumb and a single finger while using the rest to spin it to the desired orientation.
What’s great about this system is not just the naturalness of its movements and that they were arrived at independently by trial and error, but that it isn’t tied to any particular shape or type of object. Just like a human, Dactyl can grip and manipulate just about anything you put in its hand, within reason of course.
This flexibility is called generalization, and it’s important for robots that must interact with the real world. It’s impossible to hand-code separate behaviors for every object and situation in the world, but a robot that can adapt and fill in the gaps while relying on a set of core understandings can get by.
As with OpenAI’s other work, the paper describing the results is freely available, as are some of the tools they used to create and test Dactyl.

Source: Gadgets – techcrunch

NASA’s 3D-printed Mars Habitat competition doles out prizes to concept habs

A multi-year NASA contest to design a 3D-printable Mars habitat using on-planet materials has just hit another milestone — and a handful of teams have taken home some cold, hard cash. This more laid-back phase had contestants designing their proposed habitat using architectural tools, with the five winners set to build scale models next year.
Technically this is the first phase of the third phase — the (actual) second phase took place last year and teams took home quite a bit of money.
The teams had to put together realistic 3D models of their proposed habitats, and not just in Blender or something. They used Building Information Modeling software that would require these things to be functional structures designed down to a particular level of detail — so you can’t just have 2D walls made of “material TBD,” and you have to take into account thickness from pressure sealing, air filtering elements, heating, etc.
The habitats had to have at least a thousand square feet of space, enough for four people to live for a year, along with room for the machinery and paraphernalia associated with, you know, living on Mars. They must be largely assembled autonomously, at least enough that humans can occupy them as soon as they land. They were judged on completeness, layout, 3D-printing viability and aesthetics.

So although the images you see here look rather sci-fi, keep in mind they were also designed using industrial tools and vetted by experts with “a broad range of experience from Disney to NASA.” These are going to Mars, not onto the cover of a paperback. And they’ll have to be built in miniature for real next year, so they had better be realistic.
The five winning designs embody a variety of approaches. Honestly all these videos are worth a watch; you’ll probably learn something cool, and they really give an idea of how much thought goes into these designs.

Zopherus has the whole print taking place inside the body of a large lander, which brings its own high-strength printing mix to reinforce the “Martian concrete” that will make up the bulk of the structure. When it’s done printing and embedding the pre-built items like airlocks, it lifts itself up, moves over a few feet, and does it again, creating a series of small rooms. (They took first place and essentially tied the next team for take-home cash, a little under $21K.)

AI SpaceFactory focuses on the basic shape of the vertical cylinder as both the most efficient use of space and also one of the most suitable for printing. They go deep on the accommodations for thermal expansion and insulation, but also have thought deeply about how to make the space safe, functional, and interesting. This one is definitely my favorite.

Kahn-Yates has a striking design, with a printed structural layer giving way to a high-strength plastic layer that lets the light in. Their design is extremely spacious but in my eyes not very efficiently allocated. Who’s going to bring apple trees to Mars? Why have a spiral staircase with such a huge footprint? Still, if they could pull it off, this would allow for a lot of breathing room, something that will surely be of great value during a year or multi-year stay on the planet.

SEArch+/Apis Cor has carefully considered the positioning and shape of its design to maximize light and minimize radiation exposure. There are two independent pressurized areas — everyone likes redundancy — and it’s built using a sloped site, which may expand the possible locations. It looks a little claustrophobic, though.

Northwestern University has a design that aims for simplicity of construction: an inflatable vessel provides the base for the printer to create a simple dome with reinforcing cross-beams. This practical approach no doubt won them points, and the inside, while not exactly roomy, is also practical in its layout. As AI SpaceFactory pointed out, a dome isn’t really the best shape (lots of wasted space) but it is easy and strong. A couple of these connected at the ends wouldn’t be so bad.
The teams split a total of $100K for this phase, and are now moving on to the hard part: actually building these things. In spring of 2019 they’ll be expected to have a working custom 3D printer that can create a 1:3 scale model of their habitat. It’s difficult to say who will have the worst time of it, but I’m thinking Kahn-Yates (that holey structure will be a pain to print) and SEArch+/Apis (slope, complex eaves and structures).
The purse for the real-world construction is an eye-popping $2 million, so you can bet the competition will be fierce. In the meantime, seriously, watch those videos above, they’re really interesting.

Source: Gadgets – techcrunch