Google Assistant’s ‘Continued Conversation’ feature is now live

Google I/O was awash with Assistant news, but Duplex mystery aside, Continued Conversation was easily one of the most compelling announcements of the bunch. The feature is an attempt to bring more natural conversation to the AI — a kind of holy grail for these sorts of smart assistants.

Continued Conversation is rolling out to Assistant today for users in the U.S. with a Home, Home Mini or Home Max. The optional setting is designed to offer a more natural dialogue, so users don’t have to “Hey Google” Assistant every time they have a request. Google offers the following example in a blog post that just went up:

So next time you wake up and the skies are grey, just ask “Hey Google, what’s the weather today?”… “And what about tomorrow?”… “Can you add a rain jacket to my shopping list”… “And remind me to bring an umbrella tomorrow morning”…“Thank you!”

You’ll need to access the Assistant settings on an associated device in order to activate the feature. And that initial “Ok Google” or “Hey Google” will still have to be spoken to trigger the Assistant. From there, it will keep listening for up to eight seconds; if it hears no speech in that window, it stops. It’s not exactly a dialogue so much as a way of easing the awkward interaction of having to repeat the wake phrase over and over again.
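
The behavior is easy to model as a tiny loop: the wake word opens a session, every recognized utterance is handled and reopens the follow-up window, and eight seconds of silence closes it. Here is a minimal sketch of that flow; the helper functions are hypothetical, and only the eight-second figure comes from Google’s description.

```python
FOLLOW_UP_WINDOW_S = 8  # Google's stated no-speech timeout

def continued_conversation(listen_for_hotword, listen_for_speech, handle_request):
    """Toy model of a Continued Conversation session.

    The three callables are hypothetical stand-ins, not Google's API:
    listen_for_hotword() blocks until the wake word is heard,
    listen_for_speech(timeout_s) returns an utterance or None on silence,
    handle_request(text) fulfills the request.
    """
    listen_for_hotword()                    # "Hey Google" is still required once
    while True:
        utterance = listen_for_speech(timeout_s=FOLLOW_UP_WINDOW_S)
        if utterance is None:               # eight seconds with no speech: stop listening
            break
        handle_request(utterance)           # follow-ups need no new wake word
```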

Given all of the recent privacy concerns that have arisen as smart speakers and the like have exploded in popularity, it’s easy to see why Google has put these safeguards in place to assure users that the devices aren’t listening for anything beyond a wake word.

An extra eight seconds isn’t much, but those who are already skeptical about product privacy might want to keep it off, for good measure.

Football matches land on your table thanks to augmented reality

It’s World Cup season, so that means that even articles about machine learning have to have a football angle. Today’s concession to the beautiful game is a system that takes 2D videos of matches and recreates them in 3D so you can watch them on your coffee table (assuming you have some kind of augmented reality setup, which you almost certainly don’t). It’s not as good as being there, but it might be better than watching it on TV.

The “Soccer On Your Tabletop” system takes as its input a video of a match and watches it carefully, tracking each player and their movements individually. The images of the players are then mapped onto 3D models “extracted from soccer video games,” and placed on a 3D representation of the field. Basically they cross FIFA 18 with real life and produce a sort of miniature hybrid.
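
Judging from that description, the per-frame pipeline is a chain of detection, depth estimation and placement. Below is a rough structural sketch of those stages; the stage functions are passed in as placeholders and are not taken from the authors’ code.

```python
def soccer_on_your_tabletop(frames, locate_field, track_players, estimate_depth):
    """Structural sketch only; the three stage callables are placeholders for
    the steps described above: field/camera calibration, player tracking, and
    per-player depth estimation trained on meshes extracted from soccer games.
    """
    scenes = []
    for frame in frames:
        field_pose = locate_field(frame)                # where the pitch sits relative to the camera
        players_3d = []
        for track_id, crop in track_players(frame):     # 2D boxes with stable player identities
            depth = estimate_depth(crop)                # learned from video-game renders
            players_3d.append((track_id, crop, depth))  # enough to lift the player into 3D
        scenes.append((field_pose, players_3d))
    return scenes                                       # one placeable 3D scene per input frame
```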

Considering the source data — two-dimensional, low-resolution, and in motion — it’s a pretty serious accomplishment to reliably reconstruct a realistic and reasonably accurate 3D pose for each player.

Now, it’s far from perfect. One might even say it’s a bit useless. The characters’ positions are estimated, so they jump around a bit, and the ball doesn’t really appear much, so everyone appears to just be dancing around on a field. (That’s on the to-do list.)

But the idea is great, and this is a working if highly limited first shot at it. Assuming the system could ingest a whole game based on multiple angles (it could source the footage directly from the networks), you could have a 3D replay available just minutes after the actual match concluded.

Not only that, but wouldn’t it be cool to be able to gather round a central location and watch the game from multiple angles on it? I’ve always thought one of the worst things about watching sports on TV is that everyone sits there staring in one direction, seeing the exact same thing. Letting people spread out, pick sides, see things from different angles to analyze strategies — that would be fantastic.

All we need is for someone to invent a perfect, affordable holographic display that works from all angles and we’re set.

The research is being presented at the Computer Vision and Pattern Recognition conference in Salt Lake City, and it’s a collaboration between Facebook, Google, and the University of Washington.

What’s under those clothes? This system tracks body shapes in real time

With augmented reality coming in hot and depth tracking cameras due to arrive on flagship phones, the time is right to improve how computers track the motions of people they see — even if that means virtually stripping them of their clothes. A new computer vision system that does just that may sound a little creepy, but it definitely has its uses.

The basic problem is that if you’re going to capture a human being in motion, say for a movie or for an augmented reality game, there’s a frustrating vagueness to them caused by clothes. Why do you think motion capture actors have to wear those skintight suits? Because their JNCO jeans make it hard for the system to tell exactly where their legs are. Leave them in the trailer.

Same for anyone wearing a dress, a backpack, a jacket — pretty much anything other than the bare minimum will interfere with the computer getting a good idea of how your body is positioned.

The multi-institutional project (PDF), due to be presented at CVPR in Salt Lake City, combines depth data with smart assumptions about how a body is shaped and what it can do. The result is a sort of X-ray vision, revealing the shape and position of a person’s body underneath their clothes, that works in real time even during quick movements like dancing.

The paper builds on two previous methods, DynamicFusion and BodyFusion. The first uses single-camera depth data to estimate a body’s pose, but doesn’t work well with quick movements or occlusion; the second uses a skeleton to estimate pose but similarly loses track during fast motion. The researchers combined the two approaches into “DoubleFusion,” essentially creating a plausible skeleton from the depth data and then sort of shrink-wrapping it with skin at an appropriate distance from the core.
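
The “skeleton plus shrink-wrapped skin” intuition can be illustrated with a toy calculation: given estimated joints and the depth points the camera actually sees, work out how far the observed surface sits from the skeletal core. This is only an intuition-level sketch, not the paper’s algorithm (DoubleFusion itself fuses a parametric body model with a deforming surface).

```python
import numpy as np

def toy_shrink_wrap(joints, depth_points):
    """Toy illustration of the skeleton-plus-skin idea, not DoubleFusion itself.

    joints:       (J, 3) array of estimated joint positions (the "core").
    depth_points: (N, 3) array of points seen by the depth camera (the clothed surface).

    Assign every depth point to its nearest joint, then estimate a per-joint
    radius: roughly how far the observed surface sits from the skeleton.
    """
    # Distance from every depth point to every joint: shape (N, J).
    d = np.linalg.norm(depth_points[:, None, :] - joints[None, :, :], axis=-1)
    nearest = d.argmin(axis=1)  # index of the closest joint for each point
    radii = np.array([d[nearest == j, j].mean() if (nearest == j).any() else 0.0
                      for j in range(len(joints))])
    return radii  # crude per-joint "body thickness"

# Example: two joints and a ring of surface points around the first one.
joints = np.array([[0.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
theta = np.linspace(0, 2 * np.pi, 16, endpoint=False)
ring = np.stack([0.12 * np.cos(theta), np.zeros_like(theta), 0.12 * np.sin(theta)], axis=1)
print(toy_shrink_wrap(joints, ring))  # approximately [0.12, 0.0]
```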

As you can see above, depth data from the camera is combined with some basic reference imagery of the person to produce a skeleton and track the joints and terminations of the body. On the right there, you see the results of just DynamicFusion (b), just BodyFusion (c), and the combined method (d).

The results are much better than either method alone, seemingly producing excellent body models from a variety of poses and outfits:

Hoodies, headphones, baggy clothes, nothing gets in the way of the all-seeing eye of DoubleFusion.

One shortcoming, however, is that it tends to overestimate a person’s body size if they’re wearing a lot of clothes — there’s no easy way for it to tell whether someone is broad or just wearing a chunky sweater. And it doesn’t work well when the person interacts with a separate object, like a table or game controller — it would likely try to interpret those as weird extensions of limbs. Handling these exceptions is planned for future work.

The paper’s first author is Tao Yu of Tsinghua University in China, but researchers from Beihang University, Google, USC, and the Max Planck Institute were also involved.

“We believe the robustness and accuracy of our approach will enable many applications, especially in AR/VR, gaming, entertainment and even virtual try-on as we also reconstruct the underlying body shape,” write the authors in the paper’s conclusion. “For the first time, with DoubleFusion, users can easily digitize themselves.”

There’s no use denying that there are lots of interesting applications of this technology. But there’s also no use denying that this technology is basically X-ray Spex.

Google’s Datally app adds more ways to limit mobile data usage

In November, Google introduced Datally, a data-saving app largely aimed at emerging markets, where users often rely on prepaid SIM cards and don’t have access to all-you-can-eat unlimited data plans. The app lets users granularly control which apps can use data, a capability that resulted in a 30% savings on data usage during pilot testing and now saves users 21%, on average. Today, Google is giving Datally an upgrade with several new features that will help users cut data usage even further.

One key feature is the introduction of daily limits, which allow you to control your data usage on a per-day basis. This one is more about creating better habits around data consumption, so you don’t accidentally burn through too much data in a day, then end up without any data left before the month ends.

This also ties into Google’s larger push to give users more insights into their own behavior when using mobile devices, and more tools to combat the addictive nature of smartphones.

The company in May announced new time management features for Android users, as well as new features to help users silence their phones and wind down at bedtime. It also has software for parents to limit screen time for their children.

While the Datally feature is primarily about conserving data, it acknowledges that it’s often easy to get sucked into your smartphone and lose track of how much time – and, consequently, how much mobile data – you’re spending.

Another new Datally feature lets you enable a guest mode where you control how much data someone borrowing your phone can use – helpful in those situations where phones are shared among family members.

The “Unused Apps” feature, meanwhile, highlights those apps you’ve stopped using but could still be leaking data. Google notes that, for many people, 20 percent of mobile data is from apps using data in the background that haven’t been opened for over a month. Unused Apps will find those culprits so you can uninstall them, it says.

And finally, a new Wi-Fi Map shows all the nearby Wi-Fi networks so you can find those with a good signal and stop using your mobile data.

Though Datally is aimed at helping the “Next Billion Users” come online, it’s not limited to emerging markets. Anyone concerned with data usage can give it a shot.

The new additions are rolling out to Datally today, says Google.

The Android app, which has been downloaded over 10 million times, is free on Google Play.

Google makes $550M strategic investment in Chinese e-commerce firm JD.com

Google has been increasing its presence in China in recent times, and today it has continued that push by agreeing to a strategic partnership with e-commerce firm JD.com, which will see Google purchase $550 million of shares in the Chinese firm.

Google has made investments in China, released products there and opened up offices that include an AI hub, but now it is working with JD.com largely outside of China. In a joint release, the companies said they would “collaborate on a range of strategic initiatives, including joint development of retail solutions” in Europe, the U.S. and Southeast Asia.

The goal here is to merge JD.com’s experience and technology in supply chain and logistics — in China, it has opened warehouses that use robots rather than workers — with Google’s customer reach, data and marketing to produce new kinds of online retail.

Initially, that will see the duo team up to offer JD.com products for sale on the Google Shopping platform across the world, but it seems clear that the companies have other collaborations in mind for the future.

JD.com is valued at around $60 billion, based on its NASDAQ share price; the company has partnerships with the likes of Walmart and has invested heavily in automated warehouse technology, drones and other ‘next-generation’ retail and logistics.

The move for a distribution platform like Google to back a service provider like JD.com is interesting since the company, through search and advertising, has relationships with a range of e-commerce firms including JD.com’s arch rival Alibaba.

But it is a sign of the times for Google, which has already developed relationships with JD.com and its biggest backer Tencent, the $500 billion Chinese internet giant. All three companies have backed Go-Jek, the ride-hailing challenger in Southeast Asia, while Tencent and Google previously inked a patent sharing partnership and have co-invested in companies such as Chinese AI startup XtalPi.

After twenty years of Salesforce, what Marc Benioff got right and wrong about the cloud

As we enter the 20th year of Salesforce, there’s an interesting opportunity to reflect on the change Marc Benioff created for enterprise software when he launched Salesforce.com and the software-as-a-service (SaaS) model.

This model has been validated by the annual revenue stream of SaaS companies, which is fast approaching $100 billion by most estimates, and it will likely continue to transform many slower-moving industries for years to come.

However, for the cornerstone market in IT — large enterprise-software deals — SaaS represents less than 25 percent of total revenue, according to most market estimates. This split is even evident in the most recent high-profile “SaaS” acquisition of GitHub by Microsoft, with over 50 percent of GitHub’s revenue coming from the sale of its on-prem offering, GitHub Enterprise.

Data privacy and security is also becoming a major issue, with Benioff himself even pushing for a U.S. privacy law on par with GDPR in the European Union. While consumer data is often the focus of such discussions, it’s worth remembering that SaaS providers store and process an incredible amount of personal data on behalf of their customers, and the content of that data goes well beyond email addresses for sales leads.

It’s time to reconsider the SaaS model in a modern context, integrating developments of the last nearly two decades so that enterprise software can reach its full potential. More specifically, we need to consider the impact of IaaS and “cloud-native computing” on enterprise software, and how they’re blurring the lines between SaaS and on-premises applications. As the world around enterprise software shifts and the tools for building it advance, do we really need such stark distinctions about what can run where?

The original cloud software thesis

In his book, Behind the Cloud, Benioff lays out four primary reasons for the introduction of the cloud-based SaaS model:

  1. Realigning vendor success with customer success by creating a subscription-based pricing model that grows with each customer’s usage (providing the opportunity to “land and expand”). Previously, software licenses often cost millions of dollars and were paid upfront, after which the customer was obligated to pay an additional 20 percent per year in support fees. This traditional pricing structure created significant financial barriers to adoption and made procurement painful and elongated.
  2. Putting software in the browser to kill the client-server enterprise software delivery experience. Benioff recognized that consumers were increasingly comfortable using websites to accomplish complex tasks. By utilizing the browser, Salesforce avoided the complex local client installation and allowed its software to be accessed anywhere, anytime and on any device.
  3. Sharing the cost of expensive compute resources across multiple customers by leveraging a multi-tenant architecture. This ensured that no individual customer needed to invest in expensive computing hardware required to run a given monolithic application. For context, in 1999 a gigabyte of RAM cost about $1,000 and a TB of disk storage was $30,000. Benioff cited a typical enterprise hardware purchase of $385,000 in order to run Siebel’s CRM product that might serve 200 end-users.
  4. Democratizing the availability of software by removing the installation, maintenance and upgrade challenges. Drawing from his background at Oracle, he cited experiences where it took 6-18 months to complete the installation process. Additionally, upgrades were notorious for their complexity and caused significant downtime for customers. Managing enterprise applications was a very manual process, generally with each IT org becoming the ops team executing a physical run-book for each application they purchased.

These arguments also happen to be, more or less, the same ones made by infrastructure-as-a-service (IaaS) providers such as Amazon Web Services during their early days in the mid-to-late ‘00s. However, IaaS adds value at a layer deeper than SaaS, providing the raw building blocks rather than the end product. The result of their success in renting cloud computing, storage and network capacity has been many more SaaS applications than ever would have been possible if everybody had to follow the model Salesforce did several years earlier.

Suddenly able to access computing resources by the hour—and free from large upfront capital investments or having to manage complex customer installations—startups forsook traditional software for SaaS in the name of economics, simplicity and much faster user growth.

It’s a different IT world in 2018

Fast-forward to today, and in some ways it’s clear just how prescient Benioff was in pushing the world toward SaaS. Of the four reasons laid out above, Benioff nailed the first two:

  • Subscription is the right pricing model: The subscription pricing model for software has proven to be the most effective way to create customer and vendor success. Years ago, stalwart products like Microsoft Office and the Adobe Suite successfully made the switch from the upfront model to thriving subscription businesses. Today, subscription pricing is the norm for many flavors of software and services.
  • Better user experience matters: Software accessed through the browser or thin, native mobile apps (leveraging the same APIs and delivered seamlessly through app stores) has long since become ubiquitous. The consumerization of IT was a real trend, and it has driven the habits from our personal lives into our business lives.

In other areas, however, things today look very different than they did back in 1999. In particular, Benioff’s other two primary reasons for embracing SaaS no longer seem so compelling. Ironically, IaaS economies of scale (especially once Google and Microsoft began competing with AWS in earnest) and software-development practices developed inside those “web scale” companies played major roles in spurring these changes:

  • Computing is now cheap: The cost of compute and storage has been driven down so dramatically that there are limited cost savings in shared resources; the quick comparison after this list spells out the gap. Today, a gigabyte of RAM is about $5 and a terabyte of disk storage is about $30 if you buy them directly. Cloud providers give away resources to small users and charge only pennies per hour for standard-sized instances. By comparison, at the same time that Salesforce was founded, Google was running on its first data center—with combined total compute and RAM comparable to that of a single iPhone X. That is not a joke.
  • Installing software is now much easier: The process of installing and upgrading modern software has become automated with the emergence of continuous integration and deployment (CI/CD) and configuration-management tools. With the rapid adoption of containers and microservices, cloud-native infrastructure has become the de facto standard for local development and is becoming the standard for far more reliable, resilient and scalable cloud deployment. Enterprise software packaged as a set of Docker containers orchestrated by Kubernetes or Docker Swarm, for example, can be installed pretty much anywhere and be live in minutes.
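
The scale of that shift is worth spelling out using only the figures cited above; a quick back-of-the-envelope comparison:

```python
# Back-of-the-envelope comparison, using only the figures cited in this article.
ram_1999_per_gb, disk_1999_per_tb = 1_000, 30_000   # dollars, circa 1999
ram_now_per_gb, disk_now_per_tb = 5, 30             # dollars, buying directly today

siebel_hardware, siebel_users = 385_000, 200        # Benioff's cited Siebel CRM example
print(f"1999 CRM hardware per user: ${siebel_hardware / siebel_users:,.0f}")  # ~$1,925
print(f"RAM price drop:  {ram_1999_per_gb / ram_now_per_gb:,.0f}x")           # ~200x
print(f"Disk price drop: {disk_1999_per_tb / disk_now_per_tb:,.0f}x")         # ~1,000x
```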

What Benioff didn’t foresee

Several other factors have also emerged in the last few years that raise the question of whether the traditional definition of SaaS can really be the only one going forward. Here, too, there’s irony in the fact that many of the forces pushing software back toward self-hosting and management can be traced directly to the success of SaaS itself, and cloud computing in general:

  1. Cloud computing can now be “private”: Virtual private clouds (VPCs) in the IaaS world allow enterprises to maintain root control of the OS, while outsourcing the physical management of machines to providers like Google, DigitalOcean, Microsoft, Packet or AWS. This allows enterprises (like Capital One) to relinquish hardware management and the headache it often entails, but retain control over networks, software and data. It is also far easier for enterprises to get the necessary assurance for the security posture of Amazon, Microsoft and Google than it is to get the same level of assurance for each of the tens of thousands of possible SaaS vendors in the world.
  2. Regulations can penalize centralized services: One of the underappreciated consequences of Edward Snowden’s leaks, as well as an awakening to the sometimes questionable data-privacy practices of companies like Facebook, is an uptick in governments and enterprises trying to protect themselves and their citizens from prying eyes. Using applications hosted in another country or managed by a third party exposes enterprises to a litany of legal issues. The European Union’s GDPR law, for example, exposes SaaS companies to more potential liability with each piece of EU-citizen data they store, and puts enterprises on the hook for how their SaaS providers manage data.
  3. Data breach exposure is higher than ever: A corollary to the point above is the increased exposure to cybercrime that companies face as they build out their SaaS footprints. All it takes is one employee at a SaaS provider clicking on the wrong link or installing the wrong Chrome extension to expose that provider’s customers’ data to criminals. If the average large enterprise uses 1,000+ SaaS applications and each of those vendors averages 250 employees, that’s an additional 250,000 possible points of entry for an attacker.
  4. Applications are much more portable: The SaaS revolution has resulted in software vendors developing their applications to be cloud-first, but they’re now building those applications using technologies (such as containers) that can help replicate the deployment of those applications onto any infrastructure. This shift to what’s called cloud-native computing means that the same complex applications you can sign up to use in a multi-tenant cloud environment can also be deployed into a private data center or VPC much more easily than previously possible. Companies like BigID, StackRox, Dashbase and others are taking a private-cloud-native-instance-first approach to their application offerings. Meanwhile, SaaS stalwarts like Atlassian, Box, GitHub and many others are transitioning to Kubernetes-driven, cloud-native architectures that provide this optionality in the future.
  5. The script got flipped on CIOs: Individuals and small teams within large companies now drive software adoption by selecting the tools (e.g., GitHub, Slack, HipChat, Dropbox), often SaaS, that best meet their needs. Once they learn what’s being used and how it’s working, CIOs are faced with the decision to either restrict network access to shadow IT or pursue an enterprise license—or the nearest thing to one—for those services. This trend has been so impactful that it spawned an entirely new category called cloud access security brokers—another vendor that needs to be paid, an additional layer of complexity, and another avenue for potential problems. Managing local versions of these applications brings control back to the CIO and CISO.

The future of software is location agnostic

As the pace of technological disruption picks up, the previous generation of SaaS companies is facing a future similar to the legacy software providers they once displaced. From mainframes up through cloud-native (and even serverless) computing, the goal for CIOs has always been to strike the right balance between cost, capabilities, control and flexibility. Cloud-native computing, which encompasses a wide variety of IT facets and often emphasizes open source software, is poised to deliver on these benefits in a manner that can adapt to new trends as they emerge.

The problem for many of today’s largest SaaS vendors is that they were founded and scaled out during the pre-cloud-native era, meaning they’re burdened by some serious technical and cultural debt. If they fail to make the necessary transition, they’ll be disrupted by a new generation of SaaS companies (and possibly traditional software vendors) that are agnostic toward where their applications are deployed and who applies the pre-built automation that simplifies management. This next generation of vendors will put more control in the hands of end customers (who crave control), while maintaining what vendors have come to love about cloud-native development and cloud-based resources.

So, yes, Marc Benioff and Salesforce were absolutely right to champion the “No Software” movement over the past two decades, because the model of enterprise software they targeted needed to be destroyed. In the process, however, Salesforce helped spur a cloud computing movement that would eventually rewrite the rules on enterprise IT and, now, SaaS itself.

Judge says ‘literal but nonsensical’ Google translation isn’t consent for police search

Machine translation of foreign languages is undoubtedly a very useful thing, but if you’re going for anything more than directions or recommendations for lunch, its shallowness is a real barrier. And when it comes to the law and constitutional rights, a “good enough” translation doesn’t cut it, a judge has ruled.

The ruling (PDF) is not hugely consequential, but it is indicative of the evolving place translation apps occupy in our lives and legal system. We are fortunate to live in a multilingual society, but for the present and foreseeable future it seems humans are still needed to bridge language gaps.

The case in question involved a Mexican man named Omar Cruz-Zamora, who was pulled over by cops in Kansas. When they searched his car, with his consent, they found quite a stash of meth and cocaine, which naturally led to his arrest.

But there’s a catch: Cruz-Zamora doesn’t speak English well, so the consent to search the car was obtained via an exchange facilitated by Google Translate — an exchange that the court found was insufficiently accurate to constitute consent given “freely and intelligently.”

The Fourth Amendment prohibits unreasonable search and seizure, and lacking a warrant or probable cause, the officers needed Cruz-Zamora to understand that he could refuse to let them search the car. That understanding is not evident from the exchange, during which both sides repeatedly fail to comprehend what the other is saying.

Not only that, but the actual translations provided by the app weren’t good enough to accurately communicate the question. For example, the officer asked “¿Puedo buscar el auto?” — the literal meaning of which is closer to “can I find the car,” not “can I search the car.” There’s no evidence that Cruz-Zamora made the connection between this “literal but nonsensical” translation and the real question of whether he consented to a search, let alone whether he understood that he had a choice at all.

With consent invalidated, the search of the car is rendered unconstitutional and the evidence it produced is suppressed.

It doesn’t mean that consent is impossible via Google Translate or any other app — for example, if Cruz-Zamora had himself opened his trunk or doors to allow the search, that likely would have constituted consent. But it’s clear that app-based interactions are not a sure thing. This will be a case to consider not just for cops on the beat looking to help or investigate people who don’t speak English, but in courts as well.

Providers of machine translation services would have us all believe that those translations are accurate enough to use in most cases, and that in a few years they will replace human translators in all but the most demanding situations. This case suggests that machine translation can fail even the most basic tests, and as long as that possibility remains, we have to maintain a healthy skepticism.

Gmail proves that some people hate smart suggestions

Gmail has recently introduced a brand new redesign. While you can disable or ignore most of the new features, Gmail has started resurfacing old unanswered emails with a suggestion that you should reply. And this is what it looks like:

The orange text immediately grabs your attention. By bumping the email thread to the top, Gmail also breaks the chronological order of your inbox.

Gmail is also making a judgment call, telling you that maybe you should have replied and that you’ve been procrastinating. Social networks already bombard us constantly with awful content that makes us sad or angry. Your email inbox shouldn’t make you feel guilty or stressed.

Even when the suggestions are accurate, the feature is a bit creepy, poorly implemented, and it makes you feel like you’re no longer in control of your inbox.

There’s a reason why Gmail lets you disable all the smart features. Some users don’t want smart categories, important-emails-first sorting or smart reply suggestions. Arguably, the only smart feature everyone needs is the spam filter.

A pure chronological feed of your email messages is incredibly valuable as well. That’s why many Instagram users are still asking for a chronological feed. Sure, algorithmic feeds can lead to more engagement and improved productivity. Maybe Google conducted some tests and concluded that you end up answering more emails if you let Gmail do its thing.

But you may want to judge the value of each email without an algorithmic ranking.

VCs could spot the next big thing without any bias. Journalists could pay attention to young and scrappy startups as much as the new electric scooter startup in San Francisco. Universities could give a grant to students with unconventional applications. The HR department of your company could look at all applications without following Google’s order.

When the Gmail redesign started leaking, a colleague of mine said “I look forward to digging through settings to figure out how to turn this off.” And the good news is that you can turn it off.

There are now two options to disable nudges in the settings on the web version of Gmail. You can uncheck the boxes “Suggest emails to reply to” and “Suggest emails to follow up on” if you don’t want to see this orange text ever again. But those features should never have been enabled by default in the first place.

AI edges closer to understanding 3D space the way we do

If I show you a single picture of a room, you can tell me right away that there’s a table with a chair in front of it, they’re probably about the same size, about this far from each other, with the walls this far away — enough to draw a rough map of the room. Computer vision systems don’t have this intuitive understanding of space, but the latest research from DeepMind brings them closer than ever before.

The new paper from the Google-owned research outfit was published today in the journal Science (complete with news item). It details a system whereby a neural network, knowing practically nothing, can look at one or two static 2D images of a scene and reconstruct a reasonably accurate 3D representation of it. We’re not talking about going from snapshots to full 3D images (Facebook’s working on that) but rather replicating the intuitive and space-conscious way that all humans view and analyze the world.

When I say it knows practically nothing, I don’t mean it’s just some standard machine learning system. But most computer vision algorithms work via what’s called supervised learning, in which they ingest a great deal of data that’s been labeled by humans with the correct answers — for example, images with everything in them outlined and named.

This new system, on the other hand, has no such knowledge to draw on. It works without being given any ideas of how we see the world, like how objects’ colors change towards their edges, how they get bigger and smaller as their distance changes, and so on.

It works, roughly speaking, like this. One half of the system is its “representation” part, which can observe a given 3D scene from some angle, encoding it in a complex mathematical form called a vector. Then there’s the “generative” part, which, based only on the vectors created earlier, predicts what the scene would look like from a different angle.
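
A toy stand-in makes that two-part structure concrete. The sketch below is a drastically simplified, hypothetical version (plain feed-forward layers and made-up sizes, where DeepMind uses convolutional and recurrent networks): it encodes each observed image together with its camera viewpoint into a vector, sums those vectors into a single scene representation, and then predicts the view from a newly queried viewpoint.

```python
import torch
import torch.nn as nn

class ToyGQN(nn.Module):
    """Drastically simplified stand-in for a Generative Query Network."""

    def __init__(self, image_dim=64 * 64 * 3, view_dim=7, repr_dim=256):
        super().__init__()
        # "Representation" part: (image, viewpoint) observation -> scene vector.
        self.representation = nn.Sequential(
            nn.Linear(image_dim + view_dim, 512), nn.ReLU(),
            nn.Linear(512, repr_dim))
        # "Generative" part: scene vector + query viewpoint -> predicted image.
        self.generation = nn.Sequential(
            nn.Linear(repr_dim + view_dim, 512), nn.ReLU(),
            nn.Linear(512, image_dim), nn.Sigmoid())

    def forward(self, images, viewpoints, query_viewpoint):
        # images: (B, K, image_dim); viewpoints: (B, K, view_dim); K = observations.
        obs = torch.cat([images, viewpoints], dim=-1)
        scene = self.representation(obs).sum(dim=1)   # aggregate the K observations
        return self.generation(torch.cat([scene, query_viewpoint], dim=-1))

# One untrained forward pass: two observations of a (random) scene, one query view.
model = ToyGQN()
images = torch.rand(1, 2, 64 * 64 * 3)
viewpoints = torch.rand(1, 2, 7)
query = torch.rand(1, 7)
print(model(images, viewpoints, query).shape)  # torch.Size([1, 12288])
```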

(A video showing a bit more of how this works is available here.)

Think of it like someone handing you a couple of pictures of a room, then asking you to draw what you’d see if you were standing in a specific spot in it. Again, this is simple enough for us, but computers have no natural ability to do it; their sense of sight, if we can call it that, is extremely rudimentary and literal, and of course machines lack imagination.

Yet there are few better words to describe the ability to say what’s behind something when you can’t see it.

“It was not at all clear that a neural network could ever learn to create images in such a precise and controlled manner,” said lead author of the paper, Ali Eslami, in a release accompanying the paper. “However we found that sufficiently deep networks can learn about perspective, occlusion and lighting, without any human engineering. This was a super surprising finding.”

It also allows the system to accurately recreate a 3D object from a single viewpoint, such as the blocks shown here:

I’m not sure I could do that.

Obviously there’s nothing in any single observation to tell the system that some part of the blocks extends forever away from the camera. But it nonetheless creates a plausible version of the block structure that is accurate in every way. Adding one or two more observations requires the system to rectify multiple views, but results in an even better representation.

This kind of ability is critical for robots especially because they have to navigate the real world by sensing it and reacting to what they see. With limited information, such as some important clue that’s temporarily hidden from view, they can freeze up or make illogical choices. But with something like this in their robotic brains, they could make reasonable assumptions about, say, the layout of a room without having to ground-truth every inch.

“Although we need more data and faster hardware before we can deploy this new type of system in the real world,” Eslami said, “it takes us one step closer to understanding how we may build agents that learn by themselves.”

Salesforce deepens data sharing partnership with Google

Last fall at Dreamforce, Salesforce announced a deepening friendship with Google. That began to take shape in January with integration between Salesforce CRM data and Google Analytics 360 and Google BigQuery. Today, the two cloud giants announced the next step, as the companies will share data between Google Analytics 360 and the Salesforce Marketing Cloud.

This particular data sharing partnership makes even more sense as the companies can share web analytics data with marketing personnel to deliver ever more customized experiences for users (or so the argument goes, right?).

That connection certainly didn’t escape Salesforce’s VP of product marketing, Bobby Jania. “Now, marketers are able to deliver meaningful consumer experiences powered by the world’s number one marketing platform and the most widely adopted web analytics suite,” Jania told TechCrunch.

Brent Leary, owner of the consulting firm CRM Essentials says the partnership is going to be meaningful for marketers. “The tighter integration is a big deal because a large portion of Marketing Cloud customers are Google Analytics/GA 360 customers, and this paves the way to more seamlessly see what activities are driving successful outcomes,” he explained.

The partnership involves four integrations that effectively allow marketers to round-trip data between the two platforms. For starters, consumer insights from both Marketing Cloud and Google Analytics 360 will be brought together into a single analytics dashboard inside Marketing Cloud. Conversely, Marketing Cloud data will be viewable inside Google Analytics 360, both for attribution analysis and to deliver more customized web experiences using the Marketing Cloud information. All three of these integrations will be generally available starting today.

A fourth element of the partnership being announced today won’t be available in beta until the third quarter of this year. “For the first time ever audiences created inside the Google Analytics 360 platform can be activated outside of Google. So in this case, I’m able to create an audience inside of Google Analytics 360 and then I’m able to activate that audience in Marketing Cloud,” Jania explained.

An audience is like a segment, so if you have a group of like-minded individuals in the Google Analytics tool, you can simply transfer it to Salesforce Marketing Cloud and send more relevant emails to that group.

This data sharing capability removes a lot of the labor involved in trying to monitor data stored in two places, but of course it also raises questions about data privacy. Jania was careful to point out that the two platforms are not sharing specific information about individual consumers, which could be in violation of the new GDPR data privacy rules that went into effect in Europe at the end of last month.

“What [we’re sharing] is either metadata or aggregated reporting results. Just to be clear there’s no personal identifiable data that is flowing between the systems so everything here is 100% GDPR-compliant,” Jania said.

But Leary says it might not be so simple, especially in light of recent data sharing abuses. “With Facebook having to open up about how they’re sharing consumer data with other organizations, companies like Salesforce and Google will have to be more careful than ever before about how the consumer data they make available to their corporate customers will be used by them. It’s a whole new level of scrutiny that has to be a part of the data sharing equation,” Leary said.

The announcements were made today at the Salesforce Connections conference taking place in Chicago this week.