Europe’s data strategy aims to tip the scales away from big tech

Google wants to organize the world’s information, but European lawmakers are in a rush to organize the local digital sphere and make Europe “the most data-empowered continent in the world”, internal market commissioner Thierry Breton said today, setting out the thinking behind the bloc’s data strategy during a livestreamed discussion organized by the Brussels-based economic think tank Bruegel.

Rebalancing big data power dynamics to tip the scales away from big tech is another stated aim.

Breton likened the EU’s ambitious push to encourage industrial data sharing and rebalance platform power to work done in the past to organize the region’s air space and other physical infrastructure — albeit, with a lot less time to get the job done given the blistering pace of digital innovation.

“This will require of course political vision — that we have — and willingness, that I believe we have too, and smart regulation, hopefully you will judge, to set the right rules and investment in key infrastructure,” said Breton.

During the talk, he gave a detailed overview of how the flotilla of legislative proposals which are being worked on by EU lawmakers will set rules intended to support European businesses and governments to safely unlock the value of industrial and public data and drive the next decades of economic growth.

“We have been brave enough to set our rules in the personal data sphere and this is what we need to do now for government and public and industrial data. Set the rules. The European rules. Everyone will be welcome in Europe, that’s extremely important — provided they respect our rules,” said Breton.

“We don’t have one minute to lose,” he added. “The battle for industrial data is starting now and the battlefield may be Europe so we need to get ready — and this is my objective.”

EU lawmakers are drafting rules for how (non-personal) data can be used and shared; who will get access to them; and how rights can be guaranteed under the framework, per Breton. And he argued that concerns raised by European privacy challenges to international data transfers — reflected in the recent Schrems II ruling — are not limited to privacy and personal data. 

“These worries are in fact at the heart of the Single Market for data that I am building,” he said. “These worries are clear in the world we are entering when individuals or companies want to keep control over its data. The key question is, therefore, how to organize this control while allowing data flow — which is extremely important in the data economy.”

An open single European market for data must recognize that not all data are the same — “in terms of their sensitivity” — Breton emphasized, pointing to the EU’s General Data Protection Regulation (GDPR) data protection framework as “the proof of that”.

“Going forward, there are also sensitive industrial data that should benefit from specific conditions when they are accessed, used or shared,” he went on. “This is a case for instance for some sensitive public data [such as] from public hospitals, but also anonymized data that remains sensitive, mixed data which are difficult to handle.”

At one point during the talk he gave the example of European hospitals during the pandemic not being able to share data across borders to help in the fight against the virus because of the lack of a purpose-built framework to securely enable such data flows.

“I want our SMEs and startups, our public hospitals, our cities and many other actors to use more data — to make them available, to value them, to share them — but for this we need to generate the trust,” he added.

The first legislative plank of the transformation to a single European data economy is a Data Governance Act (DGA) — which Breton said EU lawmakers will present tomorrow, after a vote on the proposal this afternoon.

“With this act we are defining a European approach to data sharing,” he noted on the DGA. “This new regulation will facilitate data sharing across sectors and Member States. And it will put those who generate the data in the driving seat — moving away from the current practices of the big tech platforms.

“Concretely, with this legislation, we create the conditions to allow access to and reuse of sensitive public data, creating a body of harmonized rules for the single market.”

A key component of building the necessary trust for the data economy will mean creating rules that state “European highly sensitive data should be able to be stored and processed in the EU”, Breton also said, signalling that data localization will be a core component of the strategy — in line with a number of recent public remarks in which he’s argued it’s not protectionist for European data to be stored in Europe. 

“Without such a possibility Member States will never agree to open their data hold,” Breton went on, saying that while Europe will be “open” with data, it will not be offering a “naive” data free-for-all.

The Commission also wants the data framework to support an ecosystem of data brokers whose role, Breton said, will be to connect data owners and data users “in a neutral manner” — the implication being that companies will gain stronger control over the data they generate, rather than the current situation in which data-mining platform giants use their market power to asset-strip weaker third parties.

“We are shifting here the product,” he said. “And we promote also data altruism — the role of sharing data, industrial or personal, for common good.”

Breton also noted that the forthcoming data governance proposal will include a shielding provision — meaning data actors will be required to take steps to avoid having to comply with what he called “abusive and unlawful” data access requests for data held in Europe from third countries.

“This is a major point. It is not a question of calling into question our international judicial or policy cooperation. We cannot tolerate abuses,” he said, specifying three off-limits examples (“unauthorized access; access that does not offer sufficient legal guarantees; or fishing expeditions”), adding: “By doing so we are ensuring that European law and the guarantees it carries is respected. This is about enforcing our own rules.”

Breton also touched on other interlocking elements of the policy strategy which regional lawmakers see as crucial to delivering a functional data framework: Namely the Digital Services Act (DSA) and Digital Markets Act (DMA) — which are both due to be set out in detail early next month.

The DSA will put “a clear responsibility and obligation on platforms and the content that is spread”, said Breton.

While the companion ex ante regulation, the DMA, will “frame the behaviours of gatekeepers — of systemic actors in the Single Market — and target their behaviors against their competitors or customers”; in other words, further helping to clip the wings of big tech.

“With this set of regulation I just want to set up the rules and that the rules are clear — based on our values,” he added.

He also confirmed that interoperability and portability will be a key feature of the EU’s hoped-for data transformation.

“We are working on this on several strands,” he said on this. “The first is standards for interoperability. That’s absolutely key for sectoral data spaces that we will create and very important for the data flows. You will see that we will create a European innovation data board — set in the DGA today — which will help the Commission in setting and working the right standards.”

While combating “blocking efforts and abusive behaviors” by platform gatekeepers — which could otherwise put an artificial limit on the value of the data economy — will be “the job of the DMA”, he noted.

A fourth pillar of the data strategy — which Breton referred to as a “data act” — will be introduced in 2021, with the aim of “increasing fairness in the data economy by clarifying data usage rights in business to business and business to government settings”.

“We will also consider enhanced data portability rights to give individuals more control — which is extremely important — over the data they produce,” he added. “And we will have a look at the intellectual property rights framework.”

He also noted that key infrastructure investments will be vital — pointing to the Commission’s plan to build a European industrial cloud and related strategic tech investment priorities, such as compute power capacity, building out next-gen connectivity and support for cutting-edge technologies like quantum encryption.

Privacy campaigner Max Schrems, who had been invited as the other guest speaker, raised the issue of enforceability — pointing out that Ireland’s data protection authority, which is responsible for overseeing a large number of major tech companies in the region, still hasn’t issued any decisions on cross-border complaints filed under the 2.5-year-old GDPR framework.

Breton agreed that enforcement will be a vital piece of the puzzle — claiming EU lawmakers are alive to the problem of enforcement “bottlenecks” in the GDPR.

“We need definitely clear, predictable, implementable rules — and this is what is driving me when I am regulating against the data market. But also what you will find behind the DSA and the DMA with an ex ante regulation to be able to apply it immediately and everywhere in Europe, not only in one country, everywhere at the same time,” he said. “Just to be able to make sure that things are happening quick. In this digital space we have to be fast.”

“So we will again make sure in DSA that Member State authorities can ask platforms to remove immediately content cross-border — like, for example, if you want an immediate comparison, the European Arrest Warrant.”

The Commission will also have the power to step in via cooperation at the European level, Breton further noted.

“So you see we are putting in rules, we are not naive, we understand pretty well where we have the bottleneck — and again we try to regulate. And also, in parallel, that’s very important because like everywhere where you have regulation you need to have sanctions — you will have appropriate sanctions,” he said, adding: “We learn the lessons from the GDPR.”

Gretel announces $12M Series A to make it easier to anonymize data

As companies work with data, one of the big obstacles they face is making sure they are not exposing personally identifiable information (PII) or other sensitive data. It usually requires a painstaking manual effort to strip out that data. Gretel, an early stage startup, wants to change that by making it faster and easier to anonymize data sets. Today the company announced a $12 million Series A led by Greylock. The company has now raised $15.5 million.

Gretel founder and CEO Alex Watson says that his company was founded to make it simpler to anonymize data and unlock data sets that were previously out of reach because of privacy concerns.

“As a developer, you want to test an idea or build a new feature, and it can take weeks to get access to the data you need. Then essentially it boils down to getting approvals to get started, then snapshotting a database, and manually removing what looks like personal data and hoping that you got everything,” he said.

Watson, who previously worked as a GM at AWS, believed that there needed to be a faster and more reliable way to anonymize the data, and that’s why he started Gretel. The first product is an open source, synthetic machine learning library for developers that strips out personally identifiable information.

“Developers use our open source library, which trains machine learning models on their sensitive data, then as that training is happening we are enforcing something called differential privacy, which basically ensures that the model doesn’t memorize details about secrets for individual people inside of the data,” he said. The result is a new artificial data set that is anonymized and safe to share across a business.
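Gretel’s exact mechanism isn’t spelled out here, but differential privacy during model training is commonly enforced with the DP-SGD recipe Watson alludes to: clip each example’s gradient contribution so no single record can dominate, then add calibrated Gaussian noise before averaging. A minimal, illustrative sketch of that idea (function name and parameters are my own, not Gretel’s API):

```python
import math
import random

def dp_average_gradient(per_example_grads, clip_norm=1.0, noise_multiplier=1.1):
    """DP-SGD-style gradient aggregation.

    Each example's gradient is clipped to a maximum L2 norm, so no single
    record can dominate the update, then Gaussian noise scaled to the clip
    bound is added to the sum. The trained model therefore cannot
    "memorize" details about any individual in the data.
    """
    dim = len(per_example_grads[0])
    total = [0.0] * dim
    for g in per_example_grads:
        norm = math.sqrt(sum(x * x for x in g))
        # Scale down any gradient whose L2 norm exceeds clip_norm.
        scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
        for j in range(dim):
            total[j] += g[j] * scale
    # Noise is calibrated to the clipping bound (the per-example sensitivity).
    sigma = noise_multiplier * clip_norm
    noisy = [t + random.gauss(0.0, sigma) for t in total]
    n = len(per_example_grads)
    return [x / n for x in noisy]
```

The `noise_multiplier` trades privacy for accuracy: higher values give stronger privacy guarantees at the cost of a noisier model.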

The company was founded last year, and they have actually used this year to develop the open source product and build an open source community around it. “So our approach and our go-to-market here is we’ve open sourced our underlying libraries, and we will also build a SaaS service that makes it really easy to generate synthetic data and anonymized data at scale,” he said.

As the founders build the company, they are looking at how to build a diverse and inclusive organization, something that they discuss at their regular founders’ meetings, especially as they look to take these investment dollars and begin to hire additional senior people.

“We make a conscious effort to have diverse candidates apply, and to really make sure we reach out to them and have a conversation, and that’s paid off, or is in the process of paying off I would say, with the candidates in our pipeline right now. So we’re excited. It’s tremendously important that we avoid group think that happens so often,” he said.

The company doesn’t have paying customers yet, but the plan is to build off the relationships it has with design partners and begin taking in revenue next year. Sridhar Ramaswamy, the Greylock partner leading the investment, says his firm is placing a bet on a pre-revenue company because he sees great potential for a service like this.

“We think Gretel will democratize safe and controlled access to data for the whole world the way GitHub democratized source code access and control,” Ramaswamy said.

EU’s Google-Fitbit antitrust decision deadline pushed into 2021

The deadline for Europe to make a call on the Google-Fitbit merger has been pushed out again — with EU regulators now having until January 8, 2021, to take a decision.

The latest change to the provisional deadline, spotted earlier by Reuters, could be the result of one of the parties asking for more time.

Last month the deadline for a decision was extended until December 23 — potentially pushing the decision out beyond a year after Google announced its intention to buy Fitbit, back in November 2019. So if the tech giant was hoping for a simple and swift regulatory rubber-stamping, its hopes have been diminishing since August, when the Commission announced it was going to dig into the detail. Once bitten and all that.

The proposed Fitbit acquisition also comes as Alphabet, Google’s parent, is under intense antitrust scrutiny on multiple fronts on home turf.

Google featured prominently in a report by the House Judiciary Committee on big tech antitrust concerns earlier this month, with US lawmakers recommending a range of remedies — including breaking up platform giants.

European lawmakers are also in the process of drawing up new rules to regulate so-called ‘gatekeeper’ platforms — which would almost certainly apply to Google. A legislative proposal on that is expected before the end of this year, which means it may appear before EU regulators have taken a decision on the Google-Fitbit deal. (And one imagines Google isn’t exactly stoked about that possibility.)

Both competition and privacy concerns have been raised against allowing Google to get its hands on Fitbit users’ data.

The tech giant has responded by offering a number of pledges to try to convince regulators — saying it would not use Fitbit health and wellness data for ads and offering to have data separation requirements monitored. It has also said it would commit to maintain third parties’/rivals’ access to its Android ecosystem and Fitbit’s APIs.

However rival wearable makers have continued to criticize the proposed merger. And, earlier this week, consumer protection and human rights groups issued a joint letter — urging regulators to only approve the takeover if “merger remedies can effectively prevent [competition and privacy] harms in the short and long term”.

One thing is clear: With antitrust concerns now writ large against ‘big tech’, the era of ‘friction-free’ acquisitions looks to be behind Google et al.

Idinvest shares some trends on European consumer tech

French VC firm Idinvest has compiled some data about the European tech ecosystem. The firm has decided to focus on consumer tech in general, and there are some interesting trends that I’m going to break out here. You can read the full report here.

The team has analyzed and surveyed 1,500 companies over multiple months. And Idinvest has identified rising stars in multiple different verticals, such as fintech, mobility, healthcare and travel. If a startup has raised more than €100 million or has been acquired, Idinvest considers it a giant already.

Lesson #1: VC funding is growing at a faster pace in Europe than in the rest of the world

Notice that jump from €4.4 billion in 2014 to €16.6 billion in 2019.

Lesson #2: The European fintech boom is real

Fintech is now the largest vertical in Europe. On average, European startups attract 16% of global VC investments. But Europe is grabbing a bigger piece of that pie with fintech as European fintech startups have attracted 26% of total VC investments in that space.

And it’s not just challenger banks, such as N26, Monzo or Revolut. There are trading startups, lending startups, API-driven companies and more.

Lesson #3: Mobility is fragmented

While mobility is a huge vertical in Europe, there are countless players that do more or less the same thing — multiple scooter startups (Voi, Dott, Tier…), multiple ride-hailing startups (Bolt, Heetch, FreeNow, Cabify…), etc.

Some of them are thriving, but you’re not going to see any of the big names you’d expect in the list of giants as those companies are not European — Uber, Didi, Bird… In other words, Europe is mostly a fast follower in this space, and it’s been working fairly well.

Lesson #4: Health is regulated as hell

This isn’t surprising, but European healthcare startups seem to be mostly active in Europe. Similarly, American healthcare startups don’t seem to have a huge presence in Europe.

I personally think European healthcare startups have a shot at becoming global leaders for two reasons. First, it’s a privacy-sensitive industry and European startups tend to care more about privacy due to the legal framework. Second, the tech lash against big tech companies, such as Google, Facebook, Amazon, Microsoft and Apple, is going to be particularly strong with healthcare products.

Lesson #5: Food startups could reshape cities

Let me quote Idinvest’s report directly here because it is spot on: “In the Middle-Age, posting houses became inns to feed and offer rest to travelers. In the 20th century, McDonald’s restaurants grew exponentially around the U.S. road infrastructure. In the past 18-24 months, we have seen the rise of dark kitchens (Keatz, Taster) and dark groceries (Glovo) which are both piggybacking this new food delivery infrastructure and adding to it a real estate layer. Dark kitchens and groceries are selling products exclusively through delivery. The customer facing location of a restaurant or a grocery shop is replaced by a cheaper real estate location optimized for order preparations.”

Lesson #6: Travel is a bigger industry in Europe than in the U.S.

Tourists spend more money in Europe than in the U.S. Given that many travel startups start with a simple marketplace to improve liquidity, pricing, listings or discovery, or to open up a whole new segment, it makes sense to start in Europe.

Lesson #7: Gaming is big in Europe

Gaming, and in particular mobile gaming, has been thriving in Europe. Many casual games have emerged from European startups. There’s no European Netflix, but Minecraft, Candy Crush Saga and Angry Birds were all born in Europe.

All the rest

Idinvest’s report covers other verticals but I don’t have much to add. I’m just going to share the mapping of those verticals and you can read the report if you want to dig deeper.

Slack now strips location data from uploaded images

Slack has started to strip uploaded photos of their metadata.

What may seem like an inconsequential change to how the company handles storing files on its servers will in fact make it far more difficult to trace photos back to their original owners.

Almost every digital file — from documents on your computer to photos taken on your phone — contains metadata. That’s data about the file itself, such as how big the file is, when it was created, and by whom. Photos and videos often include the precise coordinates of where they were taken.

But that can be a problem for higher-risk Slack users, like journalists and activists, who have to take greater security precautions to keep their sources safe. The metadata inside photos can out sources, deanonymize whistleblowers, or otherwise make it easier for unfriendly governments to target individuals. Even if a journalist removes the metadata from a photo before publishing, a copy of the photo — with its metadata — may remain on Slack’s servers. Whether a hacker breaks in or a government demands the data, it can put sources at risk.

Slack confirmed to TechCrunch that it’s now started to strip photo metadata, including locations.

“We can confirm that we recently began stripping EXIF (exchangeable image file) metadata from images uploaded to Slack, including GPS coordinates,” said a Slack spokesperson.

TechCrunch tested this by uploading a photo containing location data to Slack, then pulling a copy of that uploaded image from the server. The copy from the server, when checked again, no longer had location data embedded in the document. Some metadata remains, like the make and model of the device that took the photo.
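Slack hasn’t described how its stripping works under the hood, but for JPEGs the standard approach is to walk the file’s marker segments and drop the APP1 segments that carry EXIF data (and with it, the GPS tags), leaving the compressed image data untouched. A simplified byte-level sketch of that idea — an illustration, not Slack’s actual code:

```python
import struct

def strip_exif_segments(jpeg: bytes) -> bytes:
    """Remove APP1/EXIF segments from a JPEG byte stream.

    A JPEG is a sequence of marker segments. EXIF metadata (including GPS
    coordinates) lives in APP1 segments whose payload starts with
    b"Exif\x00\x00"; dropping those segments leaves the image intact.
    """
    assert jpeg[:2] == b"\xff\xd8", "not a JPEG"
    out = bytearray(jpeg[:2])  # keep the SOI marker
    i = 2
    while i < len(jpeg):
        if jpeg[i] != 0xFF:
            # Unexpected byte outside a marker: copy the rest verbatim.
            out += jpeg[i:]
            break
        marker = jpeg[i + 1]
        if marker == 0xDA:
            # Start-of-scan: entropy-coded image data follows; copy it all.
            out += jpeg[i:]
            break
        (length,) = struct.unpack(">H", jpeg[i + 2 : i + 4])
        segment = jpeg[i : i + 2 + length]
        is_exif = marker == 0xE1 and segment[4:10] == b"Exif\x00\x00"
        if not is_exif:
            out += segment  # keep every non-EXIF segment
        i += 2 + length
    return bytes(out)
```

Note that, as TechCrunch observed, this kind of approach only removes the metadata segments; anything encoded in the pixels themselves survives.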

Slack did not say what prompted the change.

Facebook now allows users in the U.S. & Canada to export photos and videos to Google Photos

Facebook is today rolling out a tool that will allow users in the U.S. and Canada to export their Facebook photos and videos to Google Photos. This data portability tool was first introduced in Ireland in December, and has since been made available to other international markets.

To use the feature, Facebook users will need to click on “Settings,” followed by “Your Facebook Information,” then “Transfer a Copy of Your Photos and Videos.” Facebook will ask you to verify your password to confirm your identity in order to proceed. On the next screen, you’ll be able to choose “Google Photos” as the destination from the “Choose Destination” drop-down box that appears. You’ll also need your Google account information to authenticate with its service before the transfer begins.

The tool’s release comes about by way of Facebook’s participation in the Data Transfer Project, a collaborative effort with other tech giants including Apple, Google, Microsoft, and Twitter, which focuses on building out common ways for people to transfer their data between online services.

Of course, it also serves as a way for the major tech companies to fend off potential regulation as they’ll be able to point to tools like this as a way to prove they’re not holding their users hostage — if people are unhappy, they can just take their data and leave!

Facebook’s Director of Privacy and Public Policy Steve Satterfield, in an interview with Reuters on Thursday, essentially confirmed the tool is less about Facebook being in service to its users, and more about catering to policymakers’ and regulators’ demands.

“…It really is an important part of the response to the kinds of concerns that drive antitrust regulation or competition regulation,” Satterfield told the news outlet.

The launch also arrives conveniently ahead of a Federal Trade Commission hearing on September 22 that will be focused on data portability. Facebook said it would participate in that hearing, if approached, the report noted.

In Facebook’s original announcement about the tool’s launch last year, it said it would expand the service to include more than just Google Photos in the “near future.”

The transfer tool is not the only way to get your data out of Facebook. The company has offered Download Your Information since 2010. But once you have your data, there isn’t much else you can do with it — Facebook hasn’t had any large-scale rivals since older social networks like MySpace, FriendFeed (RIP!), and Friendster died and Google+ failed.

In addition to the U.S. and Canada, the photo transfer tool has been launched in several other markets, including Europe and Latin America.

Free tool helps manufacturers map where COVID-19 impacts supply chain

Assent Compliance, a company that helps large manufacturers like GE and Rolls-Royce manage complex supply chains through an online data exchange, announced a new tool this week that lets any company, customer or not, upload bills of materials and see on a map where COVID-19 is having an impact on its supply chain.

Company co-founder Matt Whitteker says the Ottawa startup focuses on supply chain data management, which means it has the data and the tooling to develop a data-driven supply chain map based on WHO data identifying COVID-19 hotspots. He believes that his is the only company to have done this.

“We’re the only ones that have taken supply chain data and applied it to this particular pandemic. And it’s something that’s really native to our platform. We have all that data on hand — we have location data for suppliers. So it’s just a matter of applying that with third party data sources (like the WHO data), and then extracting valuable business intelligence from it,” he said.

If you want to participate, you simply go to the company website and fill out a form. A customer success employee will contact you and walk you through the process of uploading your data to the platform. Once they have your data, they generate a map showing the parts of the world where your supply chain is most likely to be disrupted, identifying the level of risk based on your individual data.

The company captures supply chain data as part of the act of doing business with 1000 customers and 500,000 suppliers currently on their platform. “When companies are manufacturing products they have what’s called a bill of materials, kind of like a recipe. And companies upload their bill of materials that basically outlines all their parts, components and commodities, and who they get them from, which basically represents their supply chain,” Whitteker explained.
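Assent hasn’t published how its risk mapping works internally, but joining a bill of materials against regional case counts is conceptually a straightforward aggregation: group a company’s parts by supplier country, then weight each country by the outbreak severity reported there. A hypothetical sketch (the data shapes and threshold below are my own assumptions, not Assent’s):

```python
def score_supply_risk(bom, cases_by_country, high_risk_threshold=10_000):
    """Score supply-chain exposure from a bill of materials.

    bom: list of (part, supplier, country) tuples, i.e. the "recipe"
         Whitteker describes.
    cases_by_country: reported case counts per country (e.g. WHO data).
    Returns {country: {"parts": n, "cases": c, "risk": "high"|"low"}}.
    """
    exposure = {}
    for part, supplier, country in bom:
        entry = exposure.setdefault(
            country, {"parts": 0, "cases": cases_by_country.get(country, 0)}
        )
        entry["parts"] += 1  # count parts sourced from this country
    for entry in exposure.values():
        entry["risk"] = "high" if entry["cases"] >= high_risk_threshold else "low"
    return exposure
```

A real system would layer on supplier-level location data and more nuanced risk tiers, but the core join between a BOM and third-party hotspot data looks roughly like this.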

After the company uploads the bill of materials, Assent opens a portal for the companies to exchange data, which might be tax forms, proof of sourcing or any kind of information and documentation the manufacturer needs to comply with legal and regulatory rules around procurement of a given part.

They decided to start building the COVID-19 map application when they recognized that the pandemic was going to cause the biggest supply chain disruption the world has seen since World War II. It took about a month to build. The tool went into beta last week with customers, and over 350 signed up in the first two hours. This week, they made it generally available to anyone, even non-customers, for free.

The company was founded in 2016 and raised $220 million, according to Whitteker.

Datastax acquires The Last Pickle

Data management company Datastax, one of the largest contributors to the Apache Cassandra project, today announced that it has acquired The Last Pickle (and no, I don’t know what’s up with that name either), a New Zealand-based Cassandra consulting and services firm that’s behind a number of popular open-source tools for the distributed NoSQL database.

As Datastax Chief Strategy Officer Sam Ramji, who you may remember from his recent tenure at Apigee, the Cloud Foundry Foundation, Google and Autodesk, told me, The Last Pickle is one of the premier Apache Cassandra consulting and services companies. The team there has been building Cassandra-based open source solutions for the likes of Spotify, T-Mobile and AT&T since it was founded back in 2012. And while The Last Pickle is based in New Zealand, the company has engineers all over the world who do the heavy lifting and help these companies successfully implement the Cassandra database technology.

It’s worth mentioning that Last Pickle CEO Aaron Morton first discovered Cassandra when he worked for WETA Digital on the special effects for Avatar, where the team used Cassandra to allow the VFX artists to store their data.

“There’s two parts to what they do,” Ramji explained. “One is the very visible consulting, which has led them to become world experts in the operation of Cassandra. So as we automate Cassandra and as we improve the operability of the project with enterprises, their embodied wisdom about how to operate and scale Apache Cassandra is as good as it gets — the best in the world.” And The Last Pickle’s experience in building systems with tens of thousands of nodes — and the challenges that its customers face — is something Datastax can then offer to its customers as well.

And Datastax, of course, also plans to productize The Last Pickle’s open-source tools like the automated repair tool Reaper and the Medusa backup and restore system.

As both Ramji and Datastax VP of Engineering Josh McKenzie stressed, Cassandra has seen a lot of commercial development in recent years, with the likes of AWS now offering a managed Cassandra service, for example, even though there isn’t all that much hype around the project anymore. They argue that’s a good thing, though. Now that it is over ten years old, Cassandra has been battle-hardened. For the last ten years, Ramji argues, the industry tried to figure out what the de facto standard for scale-out computing should be. By 2019, it became clear that Kubernetes was the answer to that.

“This next decade is about what is the de facto standard for scale-out data? We think that’s got certain affordances, certain structural needs and we think that the decades that Cassandra has spent getting hardened puts it in a position to be data for that wave.”

McKenzie also noted that Cassandra’s built-in features, like support for multiple data centers and geo-replication, rolling updates and live scaling, as well as wide support across programming languages, give it a number of advantages over competing databases.

“It’s easy to forget how much Cassandra gives you for free just based on its architecture,” he said. “Losing the power in an entire datacenter, upgrading the version of the database, hardware failing every day? No problem. The cluster is 100 percent always still up and available. The tooling and expertise of The Last Pickle really help bring all this distributed and resilient power into the hands of the masses.”

The two companies did not disclose the price of the acquisition.

Art on Blockchain pioneer Verisart raises $2.5M for art and collectibles certification

There has been a lot of talk about verifying valuable items on an immutable blockchain, but the main pioneer in this space has been Verisart, which appeared a few years ago, using a blockchain to create certifications for the fine art and collectibles market. Despite the blockchain hype of the last few years, however, Verisart eschewed the fundraising bonanza, preferring instead to perfect its model and build partnerships.

That changes today with the news that it has raised $2.5 million in seed financing in a round led by Galaxy Digital EOS VC Fund. Further investment has come from existing investors Sinai Ventures and Rhodium. The funding will be used to expand Verisart’s commercial platform for authentication and further expand in the art world.

Co-Founder and CEO Robert Norton commented: “With this new round of funding, we’re able to scale our business and ramp up our partnership integrations. The art world is quickly realizing that blockchain provides a new standard in provenance and record-keeping and we’re looking forward to extending these services to the industry.”

The $325 million Galaxy EOS VC Fund is a partnership between Galaxy Digital, a blockchain-focused merchant bank, and Block.one, the publisher of the EOSIO blockchain protocol.

The funding will go towards extending the product and engineering team and launching a suite of premium services aimed at artists, galleries and collectors. The company recently appointed Paul Duncan, formerly the founding CTO of Borro, the online lending platform for luxury assets, to lead the engineering team.

In 2015, Verisart was the first company to apply blockchain technology to the physical art and collectibles market. It’s also working with some of the world’s best-known artists, including Ai Weiwei and Shepard Fairey, to certify their works of art. In 2018, Verisart won the ‘Hottest Blockchain DApp’ award at The Europas, the European tech startup awards.

It was also the first blockchain certification provider on Shopify to offer digital certification for limited editions, artworks and collectibles.

Other players are now entering this growing blockchain-for-art market. Codex Protocol is a new startup also putting art on the blockchain.

Target’s personalized loyalty program launches nationwide next month

Target today announced that its new, data-driven loyalty program, Target Circle, will launch nationwide on October 6, following a year and a half of beta testing in select markets. The program combines a variety of features, including 1% back on purchases, birthday rewards, and personalized offers and savings, designed to make it more attractive to consumers.

It also includes a way for customers to vote on Target’s community giving initiatives, which helps direct Target’s giving to around 800 nonprofits in the U.S.


The new program is designed to lure in customers who have yet to adopt Target’s store card, REDcard. While REDcard penetration today is around 23%, that number has remained fairly consistent over time — in fact, it’s down about one percentage point from a year ago.

With Target Circle, however, the retailer has another means of generating loyalty and establishing a connection with its customers on a more individualized basis.

A big part of that is the personalized aspect of the Target Circle program. In addition to the “birthday perks” (an easy way to grab some demographic data), customers will also get special discounts on the categories they “shop most often” — meaning, Target will be tapping into its treasure trove of customer purchase history to make recommendations from both in-store and online purchases along with other signals.

“As guests shop, Target leverages information about their shopping behaviors and purchases to share relevant offers that create an even more personalized, seamless shopping experience,” a company spokesperson explained, when asked for details about the data being used. “For example, a guest who frequently shops Target for baby products may receive a special offer on their next purchase of baby items.”


According to a recent retail study from Avionos, 78% of consumers are more likely to purchase from retailers that better personalize their experiences and 63% are more open to sharing personal information if retailers can better anticipate needs.

And as some may recall, Target is already scary good at personalization.

In one notable case, the retailer figured out a teen girl was pregnant before her father did, and sent her coupons for baby items. The dad, understandably, was angry — until he found out that Target was right.

That story was a high-profile example of the data collection and analysis big retailers are doing all the time, though. Target Circle simply formalizes this into an opt-in program instead of an opt-out experience.

As part of the changes, Target’s Cartwheel savings are rolling into Target Circle where they’ll be rebranded as Target Circle offers. 


Circle members will also get early access to special sales throughout the year — that is, the events people line up for, like they did for the Lilly Pulitzer fashion line or more recently, the quickly sold out Vineyard Vines collection.

Target says, in time, it will come up with “even more personalized, relevant ways” to make shopping easier for its customers.

The new program is meant to complement the REDcard, which increases the savings to 5% when used. But REDcard holders can still join Circle to take advantage of the other perks.


“Our guests are at the center of everything we do, and we’re always looking for ways to create even easier, more rewarding shopping experiences that give them another reason to choose Target,” said Rick Gomez, Target executive vice president, and chief marketing and digital officer, in a statement. “We worked directly with guests to develop Target Circle, and the program includes the benefits and perks they told us were most important to them, from earning on every trip to having the opportunity to help Target make a positive impact in their local communities,” he said.

The loyalty program had been in testing in Dallas-Ft. Worth, Charlotte, Denver, Indianapolis, Kansas City and Phoenix over the past 18 months.

Though it doesn’t have Amazon’s scale, Target has done well at quickly innovating to keep up with today’s pace of e-commerce. In short order, it has made over its stores to create more room for order pickups and online grocery, and has launched and expanded new services like Target Restock (next-day delivery), Shipt (same-day delivery) and Drive Up (same-day pickup). The changes have been paying off, with Target beating expectations in its latest earnings report with $18.42 billion in revenue and profits of $938 million.