Fleksy’s keyboard grabs $800k+ via equity crowdfunding

The dev team that’s now engineering the Fleksy keyboard app has raised more than $800,000 via an equity crowdfunding route.

As we reported a year ago, the development of Fleksy’s keyboard has been taken over by the Barcelona-based startup behind an earlier keyboard app called ThingThing.

The team says their new funding raise — described as a pre-Series A round — will be put towards continued product development of the Fleksy keyboard, including the core AI engine used for next word and content prediction, plus additional features being requested by users — such as swipe to type. 

Support for more languages is also planned. (Fleksy’s Android and iOS apps are currently available in 45+ languages.)

Their other big push will be for growth: Scaling the user-base via a licensing route to market in which the team pitches Android OEMs on the benefits of baking Fleksy in as the default keyboard — offering a high degree of customization, alongside a feature-set that boasts not just speedy typing but apps within apps and extensions. 

The Fleksy keyboard can offer direct access to web search within the keyboard, for example, as well as access to third party apps (in an apps within apps play) — to reduce the need for full app switching.

This was the original concept behind ThingThing’s eponymous keyboard app, though the team has refocused efforts on Fleksy. And bagged their first OEMs as licensing partners.

They’ve just revealed Palm as an early partner. The veteran brand unveiled a dinky palm-sized ‘ultra-mobile’ last week. The tiny extra detail is that the device runs a custom version of the Fleksy keyboard out of the box.

With just 3.3 inches of screen to play with, the keyboard on the Palm risks being a source of stressful friction. Enter Fleksy, with gesture-based tricks to speed up cramped typing, plus tried and tested next-word prediction.

ThingThing CEO Olivier Plante says Palm was looking for an “out of the box optimized input method” — and more than that “high customization”.

“We’re excited to team up with ThingThing to design a custom keyboard that delivers a full keyboard typing experience for Palm’s ultra mobile form factor,” adds Dennis Miloseski, co-founder of Palm, in a statement. “Fleksy enables gestures and voice-to-text which makes typing simple and convenient for our users on the go.”

Plante says Fleksy has more OEM partnerships up its sleeve too. “We’re pending to announce new partnerships very soon and grow our user base to more than 25 million users while bringing more revenue to the medium and small OEMs desperately looking to increase their profit margins — software is the cure,” he tells TechCrunch.

ThingThing is pitching itself as a neutral player in the keyboard space, offering OEMs a highly tweakable Qwerty layer as its strategy to compete with Android’s keyboard giants: Google’s Gboard and Microsoft-owned SwiftKey.

“We changed a lot of things in Fleksy so it feels native,” says Plante, discussing the Palm integration. “We love when the keyboard feels like the brand and with Palm it’s completely a Palm keyboard to the end-user — and with stellar performance on a small screen.”

“We’ve beaten our competitor to the punch,” he adds. 

That said, the tiny Palm (pictured in the feature image at the top of this post) is unlikely to pack much of a punch in marketshare terms. While Palm is a veteran — and, to nerds, almost cult — brand it’s not even a mobile tiddler in smartphone marketshare terms.

Palm’s cute micro phone is also an experimental attempt to create a new mobile device category — a sort of netbook-esque concept of an extra mobile that’s extra portable — which looks unlikely to be anything other than extremely niche. (Added to its petite size, the Palm is a Verizon exclusive.)

Even so, ThingThing is talking bullishly of targeting 550M devices using its keyboard by 2020.

At this stage its user-base from pure downloads is also niche: Just over 1M active users. But Plante says it has already closed “several phone brands partnerships” — saying three are signed, with three more in the works — claiming this will make Fleksy the default input method for more than 20-30 million active users in the coming months.

He doesn’t name any names but describes these other partners as “other major phone brands”.

The plan to grow Fleksy’s user-base via licensing has attracted wider investor backing now, via the equity crowdfunding route. The team had initially been targeting $300k. In all they’ve secured $815,119 from 446 investors.

Plante says they went down the equity crowdfunding route to spread their pitch more widely, and get more ambassadors on board — as well as to demonstrate “that we’re a user-centric/people/independent company aiming big”.

“We are keen to work and fully customize the keyboard to the OEM tastes. We know this is key for them so they can better compete against the others on more than simply the hardware,” he says, making the ‘Fleksy for OEMs’ pitch. “Today, the market is saturated with yet another box, better camera and better screen…. the missing piece in Android ecosystem is software differences.”

Given how tight margins remain for Android makers it remains to be seen how many will bite. Though there’s a revenue share arrangement that sweetens the deal.

It is also certainly true that differentiation in the Android space is a big problem. That’s why Palm is trying its hand at a smaller form factor — in a leftfield attempt to stand out by going small.

The European Union’s recent antitrust ruling against Google’s Android OS has also opened up an opportunity for additional software customization, via unbundled Google apps. So there’s at least a chance for some new thinking and ideas to emerge in the regional Android smartphone space. And that could be good for Spain-based ThingThing.

Aside from the licensing fee, the team’s business model relies on generating revenue via affiliate links and its fleksyapps platform. ThingThing then shares revenue with OEM partners, so that’s another carrot for them — offering a services topper on their hardware margin.

Though that piece will need scale to really spin up. Hence ThingThing’s user target for Fleksy being so big and bold.

“We’re working with brands in order to bring them into any apps where you type, which unlocks brand new use cases and enables the user to share conveniently and the brand to drive mobile traffic to their service,” says Plante. “On this note, we monetize via affiliate/deep linking and operating a fleksyapps Store.”

ThingThing has also made privacy by design a major focus — which is a key way it’s hoping to make the keyboard app stand out against data-mining big tech rivals.

Source: Mobile – TechCrunch

Xiaomi opts for sliding camera and no notch for new bezel-less Mi Mix phone

Xiaomi has announced the newest version of its bezel-less Mi Mix family, and it doesn’t sport a notch like its Mi 8 flagship. Indeed, unlike the Mi 8 — which I called one of Xiaomi’s most brazen Apple clones — there’s a lot more to get excited about.

The Mi Mix 3 was unveiled at an event in Beijing and, like its predecessor, Xiaomi boasts that it offers a full front screen. Rather than opting for the near-industry standard notch, Xiaomi has developed a slider that houses its front-facing camera. Vivo and Oppo have done similar using a motorized approach, but Xiaomi’s is magnetic while it can also be programmed for functions such as answering calls.

That array gives it a claimed 93.4 percent screen-to-body ratio and a full 6.4-inch 1080p AMOLED display. The slider, by the way, is good for 300,000 cycles, according to Xiaomi’s lab testing.

The device itself follows the much-lauded Mi Mix aesthetic with a Snapdragon 845 processor and up to 10GB in RAM (!) in the highest-end model. Xiaomi puts plenty of emphasis on cameras. The Mi Mix 3 includes four of them: a 24-megapixel front camera paired with a two-megapixel sensor and on the back, like the Mi 8, a dual camera array with two 12-megapixel cameras.

Xiaomi has also snuck an ‘AI button’ on the left side of the phone, a first for the company. That awakens its Xiao Ai voice assistant, but since it only supports Chinese don’t expect to see that on worldwide models.

The 10GB version — made in partnership with Palace Museum, located at the Forbidden City where the device was launched — also packs 256GB of onboard storage and is priced at RMB 4,999, or $720. That’s in addition to a ceramic design that Xiaomi says is inspired by the museum… better that than a fruity-sounding U.S. company.

That’s the special model, and the more affordable options include 6GB + 128GB for RMB 3,299 ($475), 8GB +128G for RMB 3,599 ($520) and 8GB + 256GB for RMB 3,999 ($575). The company also plans to introduce a 5G version in Europe sometime early next year.

Xiaomi said the phones will go on sale in China from 1 November, there’s no word on international availability or pricing right now.

Source: Mobile – TechCrunch

Mobvoi launches new $200 smartwatch and $130 AirPods alternative

Chinese AI company Mobvoi has consistently been one of the best also-rans in the smartwatch game, which remains dominated by Apple. Today, it launched a sequel to its 2016 TicWatch, which was a viral hit raising over $2 million on Kickstarter, and it unveiled a cheaper take on Apple’s AirPods.
The new TicWatch C2 was outed at a London event and is priced at $199.99. Unlike its predecessor, it has shifted from Mobvoi’s own OS to Google’s Wear OS. That isn’t a huge surprise, though, since Mobvoi’s newer budget watches and ‘pro’ watch have both already made that jump.
The C2 — which stands for classic 2 — packs NFC, Bluetooth and a voice assistant. It comes in black, platinum and rose gold. The latter color option — shown below — is thinner so presumably it is designed for female wrists.

However, there’s a compromise since the watch isn’t shipping with Qualcomm’s newest Snapdragon Wear 3100 chip. Mobvoi has instead picked the older 2100 processor. That might explain the price, but it will mean that newer Wear OS watches shipping in the coming months have better performance, particularly around battery life. As it stands, the TicWatch C2 claims a two-day battery life but the processor should be a consideration for would-be buyers.
Mobvoi also outed TicPods Free, its take on Apple’s wireless AirPods. They are priced at $129.99 and available in red, white and blue.
The earbuds already raised over $2.8 million from Indiegogo — Mobvoi typically uses crowdfunding to gather feedback and assess customer interest — and early reviews have been positive.

They work on Android and iOS and include support for Alexa and Google Assistant. They also include gesture-based controls beyond the Apple-style taps for skipping music, etc. Battery life is estimated at four hours of listening time, or 18 hours in total with the case, which doubles as a charger.
The TicPods are available to buy online now. The TicWatch C2 is up for pre-sale ahead of a “wide” launch that’s planned for December 6.
Mobvoi specializes in AI and it includes Google among its investors. It also has a joint venture with VW that is focused on bringing AI into the automotive industry. In China it is best known for AI services but globally, in the consumer space, it also offers a Google Assistant speaker called TicHome Mini.

Source: Gadgets – TechCrunch

The future of photography is code

What’s in a camera? A lens, a shutter, a light-sensitive surface and, increasingly, a set of highly sophisticated algorithms. While the physical components are still improving bit by bit, Google, Samsung and Apple are increasingly investing in (and showcasing) improvements wrought entirely from code. Computational photography is the only real battleground now.
The reason for this shift is pretty simple: Cameras can’t get too much better than they are right now, or at least not without some rather extreme shifts in how they work. Here’s how smartphone makers hit the wall on photography, and how they were forced to jump over it.
Not enough buckets
An image sensor one might find in a digital camera
The sensors in our smartphone cameras are truly amazing things. The work that’s been done by the likes of Sony, OmniVision, Samsung and others to design and fabricate tiny yet sensitive and versatile chips is really pretty mind-blowing. For a photographer who’s watched the evolution of digital photography from the early days, the level of quality these microscopic sensors deliver is nothing short of astonishing.
But there’s no Moore’s Law for those sensors. Or rather, just as Moore’s Law is now running into quantum limits at sub-10-nanometer levels, camera sensors hit physical limits much earlier. Think about light hitting the sensor as rain falling on a bunch of buckets; you can place bigger buckets, but there are fewer of them; you can put smaller ones, but they can’t catch as much each; you can make them square or stagger them or do all kinds of other tricks, but ultimately there are only so many raindrops and no amount of bucket-rearranging can change that.
Sensors are getting better, yes, but not only is this pace too slow to keep consumers buying new phones year after year (imagine trying to sell a camera that’s 3 percent better), but phone manufacturers often use the same or similar camera stacks, so the improvements (like the recent switch to backside illumination) are shared amongst them. So no one is getting ahead on sensors alone.

See the new iPhone’s ‘focus pixels’ up close

Perhaps they could improve the lens? Not really. Lenses have arrived at a level of sophistication and perfection that is hard to improve on, especially at small scale. To say space is limited inside a smartphone’s camera stack is a major understatement — there’s hardly a square micron to spare. You might be able to improve them slightly as far as how much light passes through and how little distortion there is, but these are old problems that have been mostly optimized.
The only way to gather more light would be to increase the size of the lens, either by having it A: project outwards from the body; B: displace critical components within the body; or C: increase the thickness of the phone. Which of those options does Apple seem likely to find acceptable?
In retrospect it was inevitable that Apple (and Samsung, and Huawei, and others) would have to choose D: none of the above. If you can’t get more light, you just have to do more with the light you’ve got.
Isn’t all photography computational?
The broadest definition of computational photography includes just about any digital imaging at all. Unlike film, even the most basic digital camera requires computation to turn the light hitting the sensor into a usable image. And camera makers differ widely in the way they do this, producing different JPEG processing methods, RAW formats and color science.
For a long time there wasn’t much of interest on top of this basic layer, partly from a lack of processing power. Sure, there have been filters, and quick in-camera tweaks to improve contrast and color. But ultimately these just amount to automated dial-twiddling.

The first real computational photography features were arguably object identification and tracking for the purposes of autofocus. Face and eye tracking made it easier to capture people in complex lighting or poses, and object tracking made sports and action photography easier as the system adjusted its AF point to a target moving across the frame.
These were early examples of deriving metadata from the image and using it proactively, to improve that image or feeding forward to the next.
In DSLRs, autofocus accuracy and flexibility are marquee features, so this early use case made sense; but outside a few gimmicks, these “serious” cameras generally deployed computation in a fairly vanilla way. Faster image sensors meant faster sensor offloading and burst speeds, some extra cycles dedicated to color and detail preservation and so on. DSLRs weren’t being used for live video or augmented reality. And until fairly recently, the same was true of smartphone cameras, which were more like point and shoots than the all-purpose media tools we know them as today.
The limits of traditional imaging

Despite experimentation here and there and the occasional outlier, smartphone cameras are pretty much the same. They have to fit within a few millimeters of depth, which limits their optics to a few configurations. The size of the sensor is likewise limited — a DSLR might use an APS-C sensor 23 by 15 millimeters across, making an area of 345 mm2; the sensor in the iPhone XS, probably the largest and most advanced on the market right now, is 7 by 5.8 mm or so, for a total of 40.6 mm2.
Roughly speaking, it’s collecting an order of magnitude less light than a “normal” camera, but is expected to reconstruct a scene with roughly the same fidelity, colors and such — around the same number of megapixels, too. On its face this is sort of an impossible problem.
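The "order of magnitude" claim checks out as a quick back-of-the-envelope calculation from the sensor dimensions quoted above:

```python
# Rough light-gathering comparison using the figures in the text:
# an APS-C DSLR sensor (23 x 15 mm) vs. the iPhone XS sensor (~7 x 5.8 mm).

def sensor_area_mm2(width_mm: float, height_mm: float) -> float:
    """Area of a rectangular sensor in square millimetres."""
    return width_mm * height_mm

aps_c = sensor_area_mm2(23, 15)      # 345 mm^2
iphone_xs = sensor_area_mm2(7, 5.8)  # ~40.6 mm^2

# All else being equal, collected light scales with sensor area,
# so the area ratio approximates the light-gathering gap.
ratio = aps_c / iphone_xs
print(f"APS-C: {aps_c} mm^2, iPhone XS: {iphone_xs:.1f} mm^2, ratio: {ratio:.1f}x")
```

The ratio comes out around 8.5x — close enough to the "order of magnitude" the text describes.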
Improvements in the traditional sense help out — optical and electronic stabilization, for instance, make it possible to expose for longer without blurring, collecting more light. But these devices are still being asked to spin straw into gold.
Luckily, as I mentioned, everyone is pretty much in the same boat. Because of the fundamental limitations in play, there’s no way Apple or Samsung can reinvent the camera or come up with some crazy lens structure that puts them leagues ahead of the competition. They’ve all been given the same basic foundation.
All competition therefore comprises what these companies build on top of that foundation.
Image as stream

The key insight in computational photography is that an image coming from a digital camera’s sensor isn’t a snapshot, the way it is generally thought of. In traditional cameras the shutter opens and closes, exposing the light-sensitive medium for a fraction of a second. That’s not what digital cameras do, or at least not what they can do.
A camera’s sensor is constantly bombarded with light; rain is constantly falling on the field of buckets, to return to our metaphor, but when you’re not taking a picture, these buckets are bottomless and no one is checking their contents. But the rain is falling nevertheless.
To capture an image the camera system picks a point at which to start counting the raindrops, measuring the light that hits the sensor. Then it picks a point to stop. For the purposes of traditional photography, this enables nearly arbitrarily short shutter speeds, which isn’t much use to tiny sensors.
Why not just always be recording? Theoretically you could, but it would drain the battery and produce a lot of heat. Fortunately, in the last few years image processing chips have gotten efficient enough that they can, when the camera app is open, keep a certain duration of that stream — limited resolution captures of the last 60 frames, for instance. Sure, it costs a little battery, but it’s worth it.
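The rolling buffer described above can be sketched as a simple fixed-size queue; this is a toy model, and the class and method names are illustrative rather than any real camera API:

```python
from collections import deque

class FrameStream:
    """Toy model of a camera pipeline that keeps the last N frames
    so a capture can reach backwards in time. Names are illustrative,
    not any real camera API."""

    def __init__(self, keep: int = 60):
        # A deque with maxlen silently drops the oldest frame once full,
        # mimicking the fixed-duration buffer described in the text.
        self.buffer = deque(maxlen=keep)

    def on_new_frame(self, frame):
        self.buffer.append(frame)

    def capture(self):
        # A "shutter press" returns the whole recent history, which
        # later stages (HDR merge, stabilization) can draw on.
        return list(self.buffer)

stream = FrameStream(keep=60)
for i in range(100):        # simulate 100 sensor readouts
    stream.on_new_frame(i)

shot = stream.capture()
print(len(shot), shot[0], shot[-1])  # 60 frames kept: 40 .. 99
```

The point is that by the time you press the button, the system already holds frames from before the press — which is what makes the tricks below possible.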
Access to the stream allows the camera to do all kinds of things. It adds context.
Context can mean a lot of things. It can be photographic elements like the lighting and distance to subject. But it can also be motion, objects, intention.
A simple example of context is what is commonly referred to as HDR, or high dynamic range imagery. This technique uses multiple images taken in a row with different exposures to more accurately capture areas of the image that might have been underexposed or overexposed in a single exposure. The context in this case is understanding which areas those are and how to intelligently combine the images together.
This can be accomplished with exposure bracketing, a very old photographic technique, but it can be accomplished instantly and without warning if the image stream is being manipulated to produce multiple exposure ranges all the time. That’s exactly what Google and Apple now do.
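A heavily simplified sketch of the merge step: real pipelines from Google and Apple align frames and weight pixels by noise, but the core idea — preferring unclipped readings across bracketed exposures — can be shown with plain lists of pixel values:

```python
# Minimal exposure-merge sketch, assuming three bracketed "frames"
# represented as lists of pixel brightness values in [0, 255].

def merge_hdr(frames):
    """Average each pixel across frames, ignoring clipped values
    (pure black 0 or pure white 255) when a usable reading exists."""
    merged = []
    for pixels in zip(*frames):
        usable = [p for p in pixels if 0 < p < 255]
        # If every exposure clipped, fall back to a plain average.
        source = usable if usable else pixels
        merged.append(sum(source) / len(source))
    return merged

under = [0,   40, 120]   # dark exposure: shadow clipped to 0
mid   = [30,  90, 255]   # normal exposure: highlight blown
over  = [80, 200, 255]   # bright exposure: highlight blown

print(merge_hdr([under, mid, over]))  # [55.0, 110.0, 120.0]
```

Each output pixel draws only on the exposures that actually captured detail there — the "context" the text describes.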

Something more complex is of course the “portrait mode” and artificial background blur or bokeh that is becoming more and more common. Context here is not simply the distance of a face, but an understanding of what parts of the image constitute a particular physical object, and the exact contours of that object. This can be derived from motion in the stream, from stereo separation in multiple cameras, and from machine learning models that have been trained to identify and delineate human shapes.
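The compositing step at the end of that pipeline can be sketched in miniature. This assumes the subject mask has already been produced (by stereo depth or a segmentation model); the 1-D "image" and the box blur are stand-ins for brevity, not how any real portrait mode works:

```python
# Toy portrait-mode compositor: keep subject pixels sharp, replace
# background pixels with a local average ("blur"). The mask would come
# from stereo depth or an ML segmentation model in a real pipeline.

def box_blur(pixels, radius=1):
    """Simple box blur: average each pixel with its neighbours."""
    out = []
    for i in range(len(pixels)):
        window = pixels[max(0, i - radius): i + radius + 1]
        out.append(sum(window) / len(window))
    return out

def portrait_mode(pixels, subject_mask, radius=1):
    blurred = box_blur(pixels, radius)
    # Composite: sharp where the mask says "subject", blurred elsewhere.
    return [p if m else b for p, m, b in zip(pixels, subject_mask, blurred)]

image = [10, 200, 210, 220, 20]
mask  = [0, 1, 1, 1, 0]   # middle three pixels are the subject
print(portrait_mode(image, mask))  # [105.0, 200, 210, 220, 120.0]
```

The quality of the result hinges almost entirely on how good the mask is — which is why the segmentation models, not the blur, are where the engineering effort goes.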
These techniques are only possible, first, because the requisite imagery has been captured from the stream in the first place (an advance in image sensor and RAM speed), and second, because companies developed highly efficient algorithms to perform these calculations, trained on enormous data sets and immense amounts of computation time.
What’s important about these techniques, however, is not simply that they can be done, but that one company may do them better than the other. And this quality is entirely a function of the software engineering work and artistic oversight that goes into them.

A system to tell good fake bokeh from bad

DxOMark did a comparison of some early artificial bokeh systems; the results, however, were somewhat unsatisfying. It was less a question of which looked better, and more of whether they failed or succeeded in applying the effect. Computational photography is in such early days that it is enough for the feature to simply work to impress people. Like a dog walking on its hind legs, we are amazed that it occurs at all.
But Apple has pulled ahead with what some would say is an almost absurdly over-engineered solution to the bokeh problem. It didn’t just learn how to replicate the effect — it used the computing power it has at its disposal to create virtual physical models of the optical phenomenon that produces it. It’s like the difference between animating a bouncing ball and simulating realistic gravity and elastic material physics.
Why go to such lengths? Because Apple knows what is becoming clear to others: that it is absurd to worry about the limits of computational capability at all. There are limits to how well an optical phenomenon can be replicated if you are taking shortcuts like Gaussian blurring. There are no limits to how well it can be replicated if you simulate it at the level of the photon.
Similarly the idea of combining five, 10, or 100 images into a single HDR image seems absurd, but the truth is that in photography, more information is almost always better. If the cost of these computational acrobatics is negligible and the results measurable, why shouldn’t our devices be performing these calculations? In a few years they too will seem ordinary.
If the result is a better product, the computational power and engineering ability has been deployed with success; just as Leica or Canon might spend millions to eke fractional performance improvements out of a stable optical system like a $2,000 zoom lens, Apple and others are spending money where they can create value: not in glass, but in silicon.
Double vision
One trend that may appear to conflict with the computational photography narrative I’ve described is the advent of systems comprising multiple cameras.
This technique doesn’t add more light to the sensor — that would be prohibitively complex and expensive optically, and probably wouldn’t work anyway. But if you can free up a little space lengthwise (rather than depthwise, which we found impractical) you can put a whole separate camera right by the first that captures photos extremely similar to those taken by the first.
A mock-up of what a line of color iPhones could look like
Now, if all you want to do is re-enact Wayne’s World at an imperceptible scale (camera one, camera two… camera one, camera two…) that’s all you need. But no one actually wants to take two images simultaneously, a fraction of an inch apart.
These two cameras operate either independently (as wide-angle and zoom) or one is used to augment the other, forming a single system with multiple inputs.
The thing is that taking the data from one camera and using it to enhance the data from another is — you guessed it — extremely computationally intensive. It’s like the HDR problem of multiple exposures, except far more complex as the images aren’t taken with the same lens and sensor. It can be optimized, but that doesn’t make it easy.
So although adding a second camera is indeed a way to improve the imaging system by physical means, the possibility only exists because of the state of computational photography. And it is the quality of that computational imagery that results in a better photograph — or doesn’t. The Light camera with its 16 sensors and lenses is an example of an ambitious effort that simply didn’t produce better images, though it was using established computational photography techniques to harvest and winnow an even larger collection of images.
Light and code
The future of photography is computational, not optical. This is a massive shift in paradigm and one that every company that makes or uses cameras is currently grappling with. There will be repercussions in traditional cameras like SLRs (rapidly giving way to mirrorless systems), in phones, in embedded devices and everywhere that light is captured and turned into images.
Sometimes this means that the cameras we hear about will be much the same as last year’s, as far as megapixel counts, ISO ranges, f-numbers and so on. That’s okay. With some exceptions these have gotten as good as we can reasonably expect them to be: Glass isn’t getting any clearer, and our vision isn’t getting any more acute. The way light moves through our devices and eyeballs isn’t likely to change much.
What those devices do with that light, however, is changing at an incredible rate. This will produce features that sound ridiculous, or pseudoscience babble on stage, or drained batteries. That’s okay, too. Just as we have experimented with other parts of the camera for the last century and brought them to varying levels of perfection, we have moved onto a new, non-physical “part” which nonetheless has a very important effect on the quality and even possibility of the images we take.

Source: Gadgets – TechCrunch

Google’s smart home sell looks cluttered and incoherent

If any aliens or technology ingenues were trying to understand what on earth a ‘smart home’ is yesterday, via Google’s latest own-brand hardware launch event, they’d have come away with a pretty confused and incoherent picture.
The company’s presenters attempted to sketch a vision of gadget-enabled domestic bliss but the effect was rather closer to described clutter-bordering-on-chaos, with existing connected devices being blamed (by Google) for causing homeowners’ device usability and control headaches — which thus necessitated another new type of ‘hub’ device which was now being unveiled, slated and priced to fix problems of the smart home’s own making.
Meet the ‘Made by Google’ Home Hub.
Buy into the smart home, the smart consumer might think, and you’re going to be stuck shelling out again and again — just to keep on top of managing an ever-expanding gaggle of high maintenance devices.
Which does sound quite a lot like throwing good money after bad. Unless you’re a true believer in the concept of gadget-enabled push-button convenience — and the perpetually dangled claim that smart home nirvana really is just around the corner. One additional device at a time. Er, and thanks to AI!
Yesterday, at Google’s event, there didn’t seem to be any danger of nirvana though.
Not unless paying $150 for a small screen lodged inside a speaker is your idea of heaven. (i.e. after you’ve shelled out for all the other connected devices that will form the spokes chained to this control screen.)
A small tablet that, let us be clear, is defined by its limitations: No standard web browser, no camera… No, it’s not supposed to be an entertainment device in its own right.
It’s literally just supposed to sit there and be a visual control panel — with the usual also-accessible-on-any-connected-device type of content like traffic, weather and recipes. So $150 for a remote control doesn’t sound quite so cheap now does it?
The hub doubling as a digital photo frame when not in active use — which Google made much of — isn’t some kind of ‘magic pixie’ sales dust either. Call it screensaver 2.0.
A fridge also does much the same with a few magnets and bits of paper. Just add your own imagination.
During the presentation, Google made a point of stressing that the ‘evolving’ smart home it was showing wasn’t just about iterating on the hardware front — claiming its AI software is hard at work in the background, hand-in-glove with all these devices, to really ‘drive the vision forward’.
But if the best example it can find to talk up is AI auto-picking which photos to display on a digital photo frame — at the same time as asking consumers to shell out $150 for a discrete control hub to manually manage all this IoT — that seems, well, underwhelming to say the least. If not downright contradictory.
Google also made a point of referencing concerns it said it’s heard from a large majority of users that they’re feeling overwhelmed by too much technology, saying: “We want to make sure you’re in control of your digital well-being.”
Yet it said this at an event where it literally unboxed yet another clutch of connected, demanding, function-duplicating devices — that are also still, let’s be clear, just as hungry for your data — including the aforementioned tablet-faced speaker (which Google somehow tried to claim would help people “disconnect” from all their smart home tech — so, basically, ‘buy this device so you can use devices less’… ); a ChromeOS tablet that transforms into a laptop via a snap-on keyboard; and 2x versions of its new high end smartphone, the Pixel 3.
There was even a wireless charging Pixel Stand that props the phone up in a hub-style control position. (Oh and Google didn’t even have time to mention it during the cluttered presentation but there’s this Disney co-branded Mickey Mouse-eared speaker for kids, presumably).
What’s the average consumer supposed to make of all this incestuously overlapping, wallet-badgering hardware?!
Smartphones at least have clarity of purpose — by being efficiently multi-purposed.
Increasingly powerful all-in-ones that let you do more with less and don’t even require you to buy a new one every year vs the smart home’s increasingly high maintenance and expensive (in money and attention terms) sprawl, duplication and clutter. And that’s without even considering the security risks and privacy nightmare.
The two technology concepts really couldn’t be further apart.
If you value both your time and your money the smartphone is the one — the only one — to buy into.
Whereas the smart home clearly needs A LOT of finessing — if it’s to ever live up to the hyped claims of ‘seamless convenience’.
Or, well, a total rebranding.
The ‘creatively chaotic & experimental gadget lovers’ home would be a more honest and realistic sell for now — and the foreseeable future.
Instead Google made a pitch for what it dubbed the “thoughtful home”. Even as it pushed a button to pull up a motorised pedestal on which stood clustered another bunch of charge-requiring electronics that no one really needs — in the hopes that consumers will nonetheless spend their time and money assimilating redundant devices into busy domestic routines. Or else find storage space in already overflowing drawers.
The various iterations of ‘smart’ in-home devices in the market illustrate exactly how experimental the entire concept remains.
Just this week, Facebook waded in with a swivelling tablet stuck on a smart speaker topped with a camera which, frankly speaking, looks like something you’d find in a prison warden’s office.
Google, meanwhile, has housed speakers in all sorts of physical forms, quite a few of which resemble restroom scent dispensers — what could it be trying to distract people from noticing?
And Amazon now has so many Echo devices it’s almost impossible to keep up. It’s as if the ecommerce giant is just dropping stones down a well to see if it can make a splash.
During the smart home bits of Google’s own-brand hardware pitch, the company’s parade of presenters often sounded like they were going through robotic motions, failing to muster anything more than baseline enthusiasm.
And failing to dispel a strengthening sense that the smart home is almost pure marketing, and that sticking update-requiring, wired in and/or wireless devices with variously overlapping purposes all over the domestic place is the very last way to help technology-saturated consumers achieve anything close to ‘disconnected well-being’.
Incremental convenience might be possible, perhaps — depending on which and how few smart home devices you buy; for what specific purpose/s; and then likely only sporadically, until the next problematic update topples the careful interplay of kit and utility. But the idea that the smart home equals thoughtful domestic bliss for families seems farcical.
All this updatable hardware inevitably injects new responsibilities and complexities into home life, with the conjoined power to shift family dynamics and relationships — based on things like who has access to and control over devices (and any content generated); whose job it is to fix things and any problems caused when stuff inevitably goes wrong (e.g. a device breakdown or an AI-generated snafu like the ‘wrong’ photo being auto-displayed in a communal area); and who will step up to own and resolve any disputes that arise as a result of all the Internet-connected bits being increasingly intertwined in people’s lives, willingly or otherwise.
Hey Google, is there an AI to manage all that yet?

Source: Gadgets – techcrunch

Comparing Google Home Hub vs Amazon Echo Show 2 vs Facebook Portal

The war for the countertop has begun. Google, Amazon and Facebook all revealed their new smart displays this month. Each hopes to become the center of your Internet of Things-equipped home and a window to your loved ones. The $149 Google Home Hub is a cheap and privacy-safe smart home controller. The $229 Amazon Echo Show 2 gives Alexa a visual complement. And the $199 Facebook Portal and $349 Portal+ offer a Smart Lens that automatically zooms in and out to keep you in frame while you video chat.
For consumers, the biggest questions to consider are how much you care about privacy, whether you really video chat, which smart home ecosystem you’re building around and how much you want to spend.

For the privacy obsessed, Google’s Home Hub is the only one without a camera and it’s dirt cheap at $149.
For the privacy agnostic, Facebook’s Portal+ offers the best screen and video chat functionality.
For the chatty, Amazon Echo Show 2 can do message and video chat over Alexa, call phone numbers and is adding Skype.

If you want to go off-brand, there’s also the Lenovo Smart Display, with stylish hardware in a $249 10-inch 1080p version and a $199 8-inch 720p version. And for the audiophile, there’s the $199 JBL Link View. While those hit the market earlier than the platform-owned versions we’re reviewing here, they’re not likely to benefit from the constant iteration Google, Amazon and Facebook are working on for their tabletop screens.
Here’s a comparison of the top smart displays, including their hardware specs, unique software, killer features and pros and cons:

Source: Gadgets – techcrunch

Google Lens comes to the Pixel 3 camera, can identify products


Google Lens, the technology that combines the smartphone camera’s view of the world around you with A.I., is coming to the Pixel 3 camera, Google announced this morning. That means you’ll be able to point your phone’s camera at something – like a movie poster to get local theater listings or look up an actor’s bio, or a sign in another language to translate it – and see results right in the camera app itself.

The integration is thanks to Google’s investment in A.I. technologies, something that was the underlying tie to everything Google announced today at its hardware event.

Lens, in particular, was first shown off at Google I/O back in 2017, before rolling out to new products like Google Image Search just weeks ago.

The feature has also been inside the camera apps of older Pixel devices as well as those from other manufacturers, including LG, Motorola, Xiaomi, Sony Mobile, HMD/Nokia, Transsion, TCL, OnePlus, BQ, and Asus. But Google touted Lens today as one of the new Pixel 3 camera’s big features.

With Lens, you can point your camera at a takeout menu, Google says, and it will highlight the number to call.

Another feature is centered around shopping. With a long press, you can have Lens identify a product the camera sees in the viewfinder, and have it match it to real products. This is called “Style Search,” Google says.

As Google explained at the event, you can point your Pixel 3 camera at a friend’s cool new pair of sunglasses or some shoes you like in a magazine, and Lens will point you to where you can find them online and browse similar styles. The feature is similar to Pinterest’s visual search, which has been available for some time.
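Google hasn’t published how Style Search works internally, but visual similarity search of this kind is typically built by embedding product images into vectors and ranking a catalog by similarity to the query image. Below is a minimal, runnable sketch of that idea; the `embed` function here is a trivial stand-in for a learned image-embedding model, and all product names are made up for illustration:

```python
import numpy as np

def embed(image_pixels):
    # Stand-in for a learned image-embedding model (e.g. a CNN);
    # here we just flatten and L2-normalize so the sketch is runnable.
    v = np.asarray(image_pixels, dtype=float).ravel()
    return v / (np.linalg.norm(v) + 1e-9)

def most_similar(query, catalog, top_k=3):
    """Rank catalog items by cosine similarity to the query embedding."""
    names = list(catalog)
    matrix = np.stack([catalog[n] for n in names])  # (n_items, dim)
    scores = matrix @ query                         # unit vectors -> cosine sim
    order = np.argsort(scores)[::-1][:top_k]
    return [(names[i], float(scores[i])) for i in order]

# Toy catalog: each "product photo" is a 2x2 grayscale patch.
catalog = {
    "aviator sunglasses":  embed([[0.9, 0.8], [0.1, 0.2]]),
    "running shoes":       embed([[0.1, 0.2], [0.9, 0.8]]),
    "wayfarer sunglasses": embed([[0.8, 0.9], [0.2, 0.1]]),
}

query = embed([[0.85, 0.82], [0.15, 0.18]])  # a friend's sunglasses
print(most_similar(query, catalog, top_k=2))
```

In a real system the embeddings would come from a deep network trained on product imagery, and the nearest-neighbor lookup would run against millions of items via an approximate index rather than a brute-force matrix product.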

Style Search has been available in Lens since earlier this year.

Also of note, Lens will be able to take some of its more common actions instantly in the camera, without the need for a data connection.

Google says this is possible by combining Pixel’s Visual Core with its years of work in search and computer vision.

“Being able to search the world around you is the next logical step in organizing the world’s information and making it more useful for people,” said Brian Rakowski, VP Product Management at Google.

more Google Event 2018 coverage

Source: Mobile – TechCrunch

Happy 10th anniversary, Android


It’s been 10 years since Google took the wraps off the G1, the first Android phone. Since that time the OS has grown from buggy, nerdy iPhone alternative to arguably the most popular (or at least populous) computing platform in the world. But it sure as heck didn’t get there without hitting a few bumps along the road.

Join us for a brief retrospective on the last decade of Android devices: the good, the bad, and the Nexus Q.

HTC G1 (2008)

This is the one that started it all, and I have a soft spot in my heart for the old thing. Also known as the HTC Dream — this was back when we had an HTC, you see — the G1 was about as inauspicious a debut as you can imagine. Its full keyboard, trackball, slightly janky slide-up screen (crooked even in official photos), and considerable girth marked it from the outset as a phone only a real geek could love. Compared to the iPhone, it was like a poorly dressed whale.

But in time its half-baked software matured and its idiosyncrasies became apparent for the smart touches they were. To this day I occasionally long for a trackball or full keyboard, and while the G1 wasn’t pretty, it was tough as hell.

Moto Droid (2009)

Of course, most people didn’t give Android a second look until Moto came out with the Droid, a slicker, thinner device from the maker of the famed RAZR. In retrospect, the Droid wasn’t that much better or different than the G1, but it was thinner, had a better screen, and had the benefit of an enormous marketing push from Motorola and Verizon. (Disclosure: Verizon owns Oath, which owns TechCrunch, but this doesn’t affect our coverage in any way.)

For many, the Droid and its immediate descendants were the first Android phones they had — something new and interesting that blew the likes of Palm out of the water, but also happened to be a lot cheaper than an iPhone.

HTC/Google Nexus One (2010)

This was the fruit of the continued collaboration between Google and HTC, and the first phone Google branded and sold itself. The Nexus One was meant to be the slick, high-quality device that would finally compete toe-to-toe with the iPhone. It ditched the keyboard, got a cool new OLED screen, and had a lovely smooth design. Unfortunately it ran into two problems.

First, the Android ecosystem was beginning to get crowded. People had lots of choices and could pick up phones for cheap that would do the basics. Why lay the cash out for a fancy new one? And second, Apple would shortly release the iPhone 4, which — and I was an Android fanboy at the time — objectively blew the Nexus One and everything else out of the water. Apple had brought a gun to a knife fight.

HTC Evo 4G (2010)

Another HTC? Well, this was prime time for the now-defunct company. They were taking risks no one else would, and the Evo 4G was no exception. It was, for the time, huge: the iPhone had a 3.5-inch screen, and most Android devices weren’t much bigger, if they weren’t smaller.

The Evo 4G somehow survived our criticism (our alarm now seems extremely quaint, given the size of the average phone now) and was a reasonably popular phone, but ultimately is notable not for breaking sales records but breaking the seal on the idea that a phone could be big and still make sense. (Honorable mention goes to the Droid X.)

Samsung Galaxy S (2010)

Samsung’s big debut made a hell of a splash, with custom versions of the phone appearing in the stores of practically every carrier, each with their own name and design: the AT&T Captivate, T-Mobile Vibrant, Verizon Fascinate, and Sprint Epic 4G. As if the Android lineup wasn’t confusing enough already at the time!

Though the S was a solid phone, it wasn’t without its flaws, and the iPhone 4 made for very tough competition. But strong sales reinforced Samsung’s commitment to the platform, and the Galaxy series is still going strong today.

Motorola Xoom (2011)

This was an era in which Android devices were responding to Apple, and not vice versa as we find today. So it’s no surprise that hot on the heels of the original iPad we found Google pushing a tablet-focused version of Android with its partner Motorola, which volunteered to be the guinea pig with its short-lived Xoom tablet.

Although there are still Android tablets on sale today, the Xoom represented a dead end in development — an attempt to carve a piece out of a market Apple had essentially invented and soon dominated. Android tablets from Motorola, HTC, Samsung and others were rarely anything more than adequate, though they sold well enough for a while. This illustrated the impossibility of “leading from behind” and prompted device makers to specialize rather than participate in a commodity hardware melee.

Amazon Kindle Fire (2011)

And who better to illustrate than Amazon? Its contribution to the Android world was the Fire series of tablets, which differentiated themselves from the rest by being extremely cheap and directly focused on consuming digital media. Just $200 at launch and far less later, the Fire devices catered to the regular Amazon customer whose kids were pestering them about getting a tablet on which to play Fruit Ninja or Angry Birds, but who didn’t want to shell out for an iPad.

Turns out this was a wise strategy, and of course one Amazon was uniquely positioned to do with its huge presence in online retail and the ability to subsidize the price out of the reach of competition. Fire tablets were never particularly good, but they were good enough, and for the price you paid, that was kind of a miracle.

Xperia Play (2011)

Sony has always had a hard time with Android. Its Xperia line of phones was for years considered competent — I owned a few myself — and arguably industry-leading in the camera department. But no one bought them. And the one they bought the least of, at least relative to the hype it got, has to be the Xperia Play. This thing was supposed to be a mobile gaming platform, and the idea of a slide-out keyboard is great — but the whole thing basically cratered.

What Sony had illustrated was that you couldn’t just piggyback on the popularity and diversity of Android and launch whatever the hell you wanted. Phones didn’t sell themselves, and although the idea of playing Playstation games on your phone might have sounded cool to a few nerds, it was never going to be enough to make it a million-seller. And increasingly that’s what phones needed to be.

Samsung Galaxy Note (2012)

As a sort of natural climax to the swelling phone trend, Samsung went all out with the first true “phablet,” and despite groans of protest, the phone not only sold well but became a staple of the Galaxy series. In fact, it wouldn’t be long before Apple would follow suit and produce a Plus-sized phone of its own.

The Note also represented a step towards using a phone for serious productivity, not just everyday smartphone stuff. It wasn’t entirely successful — Android just wasn’t ready to be highly productive — but in retrospect it was forward thinking of Samsung to make a go at it and begin to establish productivity as a core competence of the Galaxy series.

Google Nexus Q (2012)

This abortive effort by Google to spread Android out into a platform was part of a number of ill-considered choices at the time. No one, apparently at Google or anywhere else in the world, really knew what this thing was supposed to do. I still don’t. As we wrote at the time:

Here’s the problem with the Nexus Q: it’s a stunningly beautiful piece of hardware that’s being let down by the software that’s supposed to control it.

It was made, or rather nearly made, in the USA, though, so it had that going for it.

HTC First — “The Facebook Phone” (2013)

The First got dealt a bad hand. The phone itself was a lovely piece of hardware with an understated design and bold colors that stuck out. But its default launcher, the doomed Facebook Home, was hopelessly bad.

How bad? Announced in April, discontinued in May. I remember visiting an AT&T store during that brief period and even then the staff had been instructed in how to disable Facebook’s launcher and reveal the perfectly good phone beneath. The good news was that there were so few of these phones sold new that the entire stock started selling for peanuts on eBay and the like. I bought two and used them for my early experiments in ROMs. No regrets.

HTC One/M8 (2014)

This was the beginning of the end for HTC, but their last few years saw them update their design language to something that actually rivaled Apple. The One and its successors were good phones, though HTC oversold the “Ultrapixel” camera, which turned out to not be that good, let alone iPhone-beating.

As Samsung increasingly dominated, Sony plugged away, and LG and Chinese companies increasingly entered the fray, HTC was under assault and even a solid phone series like the One couldn’t compete. 2014 was a transition period with old manufacturers dying out and the dominant ones taking over, eventually leading to the market we have today.

Google/LG Nexus 5X and Huawei Nexus 6P (2015)

This was the line that brought Google into the hardware race in earnest. After the bungled Nexus Q launch, Google needed to come out swinging, and they did that by marrying their more pedestrian hardware with some software that truly zinged. Android 5 was a dream to use, Marshmallow had features that we loved … and the phones became objects that we adored.

We called the 6P “the crown jewel of Android devices”. This was when Google took its phones to the next level and never looked back.

Google Pixel (2016)

If the Nexus was, in earnest, the starting gun for Google’s entry into the hardware race, the Pixel line could be its victory lap. It’s an honest-to-god competitor to the Apple phone.

Gone are the days when Google was playing catch-up on features to Apple; instead, Google’s a contender in its own right. The phone’s camera is amazing. The software works relatively seamlessly (bring back guest mode!), and the phone’s size and power are everything anyone could ask for. The sticker price, like Apple’s newest iPhones, is still a bit of a shock, but this phone is the teleological endpoint in the Android quest to rival its famous, fruitful contender.

The rise and fall of the Essential phone

In 2017 Andy Rubin, the creator of Android, debuted the first fruits of his new hardware startup studio, Playground Global, with the launch of Essential (and its first phone). The company had raised $300 million to bring the phone to market, and — as the first hardware device to come to market from Android’s creator — it was being heralded as the next new thing in hardware.

Here at TechCrunch, the phone received mixed reviews. Some on staff hailed the phone as the achievement of Essential’s stated vision — to create a “lovemark” for Android smartphones — while others on staff found the device… inessential.

Ultimately, the market seemed to agree. Four months ago plans for a second Essential phone were put on hold, while the company explored a sale and pursued other projects. There’s been little update since.

A Cambrian explosion in hardware

In the ten years since its launch, Android has become the most widely used operating system for hardware. Some version of its software can be found in roughly 2.3 billion devices around the world and it’s powering a technology revolution in countries like India and China — where mobile operating systems and access are the default. As it enters its second decade, there’s no sign that anything is going to slow its growth (or dominance) as the operating system for much of the world.

Let’s see what the next ten years bring.

Source: Mobile – TechCrunch

‘Jackrabbot 2’ takes to the sidewalks to learn how humans navigate politely

Autonomous vehicles and robots have to know how to get from A to B without hitting obstacles or pedestrians — but how can they do so politely and without disturbing nearby humans? That’s what Stanford’s Jackrabbot project aims to learn, and now a redesigned robot will be cruising campus learning the subtleties of humans negotiating one another’s personal space.
“There are many behaviors that we humans subconsciously follow – when I’m walking through crowds, I maintain personal distance or, if I’m talking with you, someone wouldn’t go between us and interrupt,” said grad student Ashwini Pokle in a Stanford News release. “We’re working on these deep learning algorithms so that the robot can adapt these behaviors and be more polite to people.”
Of course there are practical applications pertaining to last mile problems and robotic delivery as well. What do you do if someone stops in front of you? What if there’s a group running up behind? Experience is the best teacher, as usual.
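The Jackrabbot team’s actual approach uses learned deep models, but the underlying idea — prefer routes that respect personal space even when they’re longer — can be illustrated with a simple hand-written path cost. This is a minimal sketch, not the project’s method; the 1.2-metre comfort radius and the penalty weight are assumed values for illustration:

```python
import math

PERSONAL_SPACE = 1.2  # metres; an assumed comfort radius, not a Stanford figure

def social_cost(path, pedestrians, weight=5.0):
    """Path length plus a penalty for entering pedestrians' personal space.

    A planner comparing candidate paths with this cost will prefer a
    slightly longer route that keeps a polite distance over a shorter
    one that barges through someone.
    """
    length = sum(math.dist(a, b) for a, b in zip(path, path[1:]))
    penalty = 0.0
    for point in path:
        for ped in pedestrians:
            d = math.dist(point, ped)
            if d < PERSONAL_SPACE:
                penalty += weight * (PERSONAL_SPACE - d)
    return length + penalty

pedestrians = [(2.0, 0.0)]
direct = [(0.0, 0.0), (2.0, 0.0), (4.0, 0.0)]  # walks straight through someone
polite = [(0.0, 0.0), (2.0, 1.5), (4.0, 0.0)]  # detours around them
print(social_cost(direct, pedestrians), social_cost(polite, pedestrians))
```

What the deep learning version adds is everything this sketch hard-codes: the radius and weighting are learned from observing how people actually negotiate space, and they vary with context — walking pace, groups, conversations — rather than being a fixed number.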
The first robot was put to work in 2016, and has been hard at work building a model of how humans (well, mostly undergrads) walk around safely, avoiding one another while taking efficient paths, and signal what they’re doing the whole time. But technology has advanced so quickly that a new iteration was called for.
The JackRabbot project team with JackRabbot 2 (from left to right): Patrick Goebel, Noriaki Hirose, Tin Tin Wisniewski, Amir Sadeghian, Alan Federman, Silvio Savarese, Roberto Martín-Martín, Pin Pin Tea-mangkornpan and Ashwini Pokle
The new robot has a vastly improved sensor suite compared to its predecessor: two Velodyne lidar units giving 360-degree coverage, plus a set of stereo cameras making up its neck that give it another depth-sensing 360-degree view. The cameras and sensors on its head can also be pointed wherever needed, of course, just like ours. All this imagery is collated by a pair of new GPUs in its base/body.
Amir Sadeghian, one of the researchers, said this makes Jackrabbot 2 “one of the most powerful robots of its size that has ever been built.”

This will allow the robot to sense human motion with a much greater degree of precision than before, and also operate more safely. It will also give the researchers a chance to see how the movement models created by the previous robot integrate with this new imagery.
The other major addition is a totally normal-looking arm that Jackrabbot 2 can use to gesture to others. After all, we do it, right? When it’s unclear who should enter a door first or what side of a path they should take, a wave of the hand is all it takes to clear things up. Usually. Hopefully this kinked little gripper accomplishes the same thing.
Jackrabbot 2 can zoom around for several hours at a time, Sadeghian said. “At this stage of the project for safety we have a human with a safety switch accompanying the robot, but the robot is able to navigate in a fully autonomous way.”
Having working knowledge of how people use the space around them and how to predict their movements will be useful to startups like Kiwi, Starship, and Marble. The first time a delivery robot smacks into someone’s legs is the last time they consider ordering something via one.

Source: Gadgets – techcrunch

The Google Assistant is now bilingual 

The Google Assistant just got more useful for multilingual families. Starting today, you’ll be able to set up two languages in the Google Home app and the Assistant on your phone and Google Home will then happily react to your commands in both English and Spanish, for example.
Today’s announcement doesn’t exactly come as a surprise, given that Google announced at its I/O developer conference earlier this year that it was working on this feature. It’s nice to see that this year, Google is rolling out its I/O announcements well before next year’s event. That hasn’t always been the case in the past.
Currently, the Assistant is only bilingual and it still has a few languages to learn. But for the time being, you’ll be able to set up any language pair that includes English, German, French, Spanish, Italian and Japanese. More pairs are coming in the future and Google also says it is working on trilingual support, too.
Google tells me this feature will work with all Assistant surfaces that support the languages you have selected. That’s basically all phones and smart speakers with the Assistant, but not the new smart displays, as they only support English right now.

While this may sound like an easy feature to implement, Google notes this was a multi-year effort. To build a system like this, you have to be able to identify multiple languages, understand them and then make sure you present the right experience to the user. And you have to do all of this within a few seconds.
Google says its language identification model (LangID) can now distinguish between 2,000 language pairs. With that in place, the company’s researchers then had to build a system that could turn spoken queries into actionable results in all supported languages. “When the user stops speaking, the model has not only determined what language was being spoken, but also what was said,” Google’s VP Johan Schalkwyk and Google Speech engineer Lopez Moreno write in today’s announcement. “Of course, this process requires a sophisticated architecture that comes with an increased processing cost and the possibility of introducing unnecessary latency.”
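Conceptually, the pipeline Schalkwyk and Lopez Moreno describe — work out which language is being spoken while also recognizing what was said, then keep only the winning hypothesis — can be sketched as running one recognizer per enabled language in parallel and picking the most confident result. The recognizers below are stand-in functions, not Google’s LangID or speech stack, and the confidence numbers are invented for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in recognizers: in a real system each would be a speech
# recognizer for one language, returning (transcript, confidence).
def recognize_en(audio):
    return ("turn on the lights", 0.31 if "hola" in audio else 0.92)

def recognize_es(audio):
    return ("enciende las luces", 0.95 if "hola" in audio else 0.12)

RECOGNIZERS = {"en": recognize_en, "es": recognize_es}

def bilingual_recognize(audio, enabled=("en", "es")):
    """Run one recognizer per enabled language in parallel and keep
    the most confident hypothesis, so that by the time the user stops
    speaking we know both the language and what was said."""
    with ThreadPoolExecutor() as pool:
        futures = {lang: pool.submit(RECOGNIZERS[lang], audio)
                   for lang in enabled}
        results = {lang: f.result() for lang, f in futures.items()}
    lang = max(results, key=lambda l: results[l][1])
    transcript, confidence = results[lang]
    return lang, transcript, confidence

print(bilingual_recognize("hola asistente ..."))  # picks the Spanish hypothesis
```

This also makes the latency trade-off in the quote concrete: every enabled language costs a full recognition pass, which is presumably part of why the feature launched with pairs rather than arbitrary sets of languages.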

If you are in Germany, France or the U.K., you’ll now also be able to use the bilingual assistant on a Google Home Max. That high-end version of the Google Home family is going on sale in those countries today.
In addition, Google today announced that a number of new devices will soon support the Assistant, including tado° thermostats, a number of new security and smart home hubs (though not, of course, Amazon’s own Ring Alarm), smart bulbs, and appliances such as the iRobot Roomba 980, 896 and 676 vacuums. Who wants to have to push a button on a vacuum, after all.

Source: Gadgets – techcrunch