Credit startup Migo expands to Brazil on $20M raise and Africa growth

After growing its lending business in West Africa, emerging markets credit startup Migo is expanding to Brazil on a $20 million Series B funding round led by Valor Capital Group.

The San Mateo-based company — previously branded Mines.io — provides AI-driven products to large firms so those companies can extend credit to underbanked consumers in viable ways.

That generally means making lending services to low-income populations in emerging markets profitable for big corporates, where they previously were not.

Founded in 2013, Migo launched in Nigeria, where the startup now counts fintech unicorn Interswitch and Africa’s largest telecom, MTN, among its clients.

Offering its branded products through partner channels, Migo has originated over 3 million loans to over 1 million customers in Nigeria since 2017, according to company stats.

“The global social inequality challenge is driven by a lack of access to credit. If you look at the middle class in developed countries, it is largely built on access to credit,” Migo founder and CEO Ekechi Nwokah told TechCrunch.

“What we are trying to do is to make prosperity available to all by reinventing the way people access and use credit,” he explained.

Migo does this through a cloud-based, data-driven platform that helps banks, companies, and telcos make credit decisions around populations they may previously have bypassed.

These entities integrate Migo’s API into their apps to offer these overlooked market segments digital accounts and lines of credit, Nwokah explained.

“Many people are trying to do this with small micro-loans. That’s the first place you understand risk, but we’re developing into point of sale solutions,” he said.

Migo’s client consumers can access their credit lines and make payments by entering a merchant phone number on their phone (via USSD) and then clicking on “Pay with Migo”. Migo can also be set up for use with QR codes, according to Nwokah.

He believes structural factors in frontier and emerging markets make it difficult for large institutions to serve people without traditional credit profiles.

“What makes it hard for the banks is it’s just too expensive,” he said of establishing the infrastructure, technology, and staff to serve these market segments.

Nwokah sees similarities in unbanked and underbanked populations across the world, including Brazil and African countries such as Nigeria.

“Statistically, the number of people without credit in Nigeria is about 90 million people, and it’s about 100 million adults that don’t have access to credit in Brazil. The countries are roughly the same size and the problem is roughly the same,” he said.

On clients in Brazil, Migo has a number of deals in the pipeline, according to Nwokah, and has signed a deal with a big-name partner in the South American country of 210 million, but could not yet disclose which one.

Migo generates revenue through interest and fees on its products. With lead investor Valor Capital Group, new investors AfricInvest and Cathay Innovation joined existing backers Velocity Capital and The Rise Fund on the startup’s $20 million Series B.

Increasingly, Africa — with its large share of the world’s unbanked — and Nigeria — home to the continent’s largest economy and population — have become proving grounds for startups looking to create scalable emerging market finance solutions.

Migo could become a pioneer of sorts by shaping a fintech credit product in Africa with application in frontier, emerging, and developed markets.

“We could actually take this to the U.S. We’ve had discussions with several partners about bringing the technology to the U.S. and Europe,” said Nwokah. In the near term, though, Migo is more likely to expand to Asia, he said.


VTEX, an e-commerce platform used by Walmart, raises $140M led by SoftBank’s LatAm fund

E-commerce now accounts for 14% of all retail sales, and its growth has led to a rise in the fortunes of startups that build tools to enable businesses to sell online. In the latest development, a company called VTEX — which got its start in Latin America helping companies like Walmart expand to new markets with an end-to-end e-commerce service covering order and inventory management, front-end customer experience, and customer service — has raised $140 million in funding, money it will use to continue taking its business deeper into more international markets.

The investment is being led by SoftBank, specifically via its Latin American fund, with participation from Gávea Investimentos and Constellation Asset Management. Previous investors include Riverwood and Naspers; Riverwood continues to be a backer, the company said.

Mariano Gomide, the CEO who co-founded VTEX with Geraldo Thomaz, said the valuation is not being disclosed, but he confirmed that the founders and founding team continue to hold more than 50% of the company. In addition to Walmart, VTEX customers include Levi’s, Sony, L’Oréal and Motorola. Annually, it processes some $2.4 billion in gross merchandise value across some 2,500 stores, having grown 43% per year over the last five years.

VTEX is in that category of tech businesses that has been around for some time — it was founded in 1999 — but has largely been able to operate and grow off its own balance sheet. Before now, it had raised less than $13 million, according to PitchBook data.

This is one of the big rounds to come out of the relatively new SoftBank Innovation Fund, an effort dedicated to investing in tech companies focused on Latin America. The fund was announced earlier this year at $2 billion and has since expanded to $5 billion. Other Latin American companies that SoftBank has backed include online delivery business Rappi, lending platform Creditas, and proptech startup QuintoAndar.

The common theme among many SoftBank investments is a focus on e-commerce in its many forms (whether that’s transactions for loans or to get a pizza delivered) and VTEX is positioned as a platform player that enables a lot of that to happen in the wider marketplace, providing not just the tools to build a front end, but to manage the inventory, ordering and customer relations at the back end.

“VTEX has three attributes that we believe will fuel the company’s success: a strong team culture, a best-in-class product and entrepreneurs with profitability mindset,” said Paulo Passoni, managing investment partner at SoftBank’s Latin America fund, in a statement. “Brands and retailers want reliability and the ability to test their own innovations. VTEX offers both, filling a gap in the market. With VTEX, companies get access to a proven, cloud-native platform with the flexibility to test add-ons in the same data layer.”

Although VTEX has been expanding into markets like the US (where it acquired UniteU earlier this year), the company still makes some 80% of its revenues annually in Latin America, Gomide said in an interview.

There, it has been a key partner to retailers and brands interested in expanding into the region, providing integrations to localise storefronts, a platform to help brands manage customer and marketplace relations, and analytics — competing against the likes of SAP, Oracle, Adobe, and Salesforce (but not, he said in answer to my question, Commercetools, which builds Shopify-style API tools for mid- and large-sized enterprises and itself raised $145 million last month).

E-commerce, as we’ve pointed out before, is a business of economies of scale. Case in point: while VTEX processes some $2.5 billion in transactions annually, it makes a relatively small return on that — $69 million, to be exact. This, plus the benefit of analytics on a wider set of big data (another economies-of-scale play), are two of the big reasons why VTEX is now doubling down on growth in newer markets like Europe and North America. The company now has 122 integrations with localised payment methods.

“At the end of the day, e-commerce software is a combination of knowledge. If you don’t have access to thousands of global cases you can’t imbue the software with knowledge,” Gomide said. “Companies that have been focused on one specific region are now realising that trade is a global thing. China has proven that, so a lot of companies are now coming to us because their existing providers of e-commerce tools can’t ‘do international.’ There are very few companies that can serve that global approach and that is why we are betting on being a global commerce platform, not just one focused on Latin America.”

Chaka opens up global investing to Africa’s most populous nation

Fintech startup Chaka aims to open up online investing to Africa’s most populous nation, Nigeria.

The seed-stage company recently went live with its mobile-based platform that offers Nigerians stock trading in over 40 countries.

Chaka positions itself as a passport to local and global investing. The startup has created an API and interface that allows Nigerians with a bank account (and who meet KYC requirements) to create trading accounts to purchase global blue chip and local Nigerian stocks.

Investors can get started with as little as 1000 Naira or $10 to create a local and global wallet to trade, according to Chaka founder and CEO Tosin Osibodu.

The platform has partnerships with two brokers to facilitate stock purchases: Citi Investment Capital and U.S.-based DriveWealth.

“Embedded in our offer is the ability to buy on the local stock market…we make it more seamless than usual, and assets…from this whole universe outside the continent,” said Osibodu.

The Nigerian Stock Exchange has been upgrading its platform to digitize and accommodate more listings. It has a five-year partnership with Nasdaq, and Airtel Africa listed on the NSE in July.

On Chaka’s addressable market: “Our outlook is that within Nigeria…between one and two million people are strongly in the market for this product,” Osibodu said.


Chaka looks to offer more than stocks. “Our product road-map includes not just equities, but other investment products people are interested in — mutual funds, fixed income products, and eventually even cryptocurrencies — so that really expands our bounds,” said Osibodu.

Chaka’s fee structure is 100 Naira (or 3%) for local trades and $4.00 for global trades.

To mitigate the FX risk of the often volatile Nigerian Naira, the startup converts locally to dollars and funds client trades in USD. Chaka agrees to intra-day forward rates at 9am each day and locks them in until 2pm for transactional activity on its platform, according to Osibodu.
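
The mechanics of that rate lock are simple enough to sketch. Below is a minimal, hypothetical Python illustration — the class name, the example NGN/USD rate, and the exact window behavior are assumptions for illustration, not Chaka’s actual implementation:

```python
from datetime import time

class FxWindow:
    """Hypothetical model of a locked intra-day forward rate:
    one NGN/USD rate, fixed at 9am and honored until 2pm."""

    def __init__(self, rate_ngn_per_usd, start=time(9, 0), end=time(14, 0)):
        self.rate = rate_ngn_per_usd  # illustrative rate, not live market data
        self.start = start
        self.end = end

    def usd_value(self, amount_ngn, at):
        """Convert a Naira deposit to USD at the locked rate,
        refusing conversions outside the lock window."""
        if not (self.start <= at <= self.end):
            raise ValueError("rate lock window closed; wait for the next fixing")
        return amount_ngn / self.rate

window = FxWindow(rate_ngn_per_usd=362.5)
print(window.usd_value(36250, at=time(10, 30)))  # 100.0 — 36,250 NGN funds a $100 trade
```

A conversion attempted after 2pm would raise an error, mirroring the daily re-fixing Osibodu describes.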

Chaka hasn’t disclosed amounts, but confirms it has received pre-seed funding from Nigerian founder and investor Iyinoluwa Aboyeji, aka E.

The startup is in a unique position in African fintech. The sector receives the bulk of the continent’s VC (according to WeeTracker), but most of it is directed toward P2P payments startups — vs. personal investment platforms.

An alum of UPenn and Dartmouth, Chaka’s founder got the idea to form the venture, in part, due to challenges attempting to access well-known trading platforms, such as E-Trade.

“I tried to open these accounts and whenever I…disclosed I was Nigerian very shortly after those accounts were closed or denied,” said Osibodu. 

For decades, Nigeria has been known as an originating country for online fraud, commonly referred to as 419 scams. This is something for which the country’s legitimate business operators pay an undue reputational cost, according to Osibodu. 

In recent years, Nigeria has also become a magnet for legitimate business in Africa. The country has the continent’s leading movie and entertainment industry and has emerged as a hotspot for startup formation and VC activity.

Chaka backer Iyinoluwa Aboyeji, who confirmed his investment in the company to TechCrunch, believes progressive trends in Nigeria will open up a new investor class.

In addition to Aboyeji, Chaka has also received seed funds from Microtraction, a Lagos-based early-stage investment firm founded by Yele Bademosi and supported by Y Combinator CEO Michael Seibel.

Chaka allows for API integrations and has a developer team. The company has created an automated customer verification process. “It sounds trivial compared to the American market, but it’s a bit of a first in Nigeria,” said CEO Tosin Osibodu.

On Chaka’s long game: “The grand mission of the company is to reduce capital market access barriers,” according to Osibodu.

“With a two to five million customer base — and a $40 to $200 ARPU — on the really conservative end that’s a $100 million revenue opportunity,” he said.


Senegal’s NIMA Codes to launch address app in 15 African countries

Senegalese startup NIMA Codes — a digital mapping service for locations without formal addresses — has upgraded its app and plans to go live in 15 African countries in 2020.

The pre-seed stage startup launched in 2018 around an API that uses mobile-phone numbers to catalog coordinates for unregistered homes and businesses in Senegal.

NIMA Codes is adding a chat tool to its platform, to help users locate and comment on service providers, and is integrating a photo-based location identifier, NIMA Snap, in the application.

“What we offer right now is a reliable street-addressing product. Because it’s very difficult for people…to communicate location in Africa and a lot of services are using location. So we need a service that can communicate reliable locations,” NIMA Codes co-founder and CEO Mouhamadou Sall told TechCrunch.

By several rankings, NIMA Codes has become a top-three downloaded navigation app in Senegal (for Android and iOS). The platform has 16,000 subscribed users and recorded over 100,000 searches, according to Sall.

He and co-founder Steven Sakayroun (a software engineer and IBM alum) came up with the idea of assigning location coordinates to mobile numbers in previous software development roles.

“If you look at street addresses in North America, in the end they are just a way to name longitude and latitude, because the computer doesn’t know what 6th Avenue really means,” Sall said.

Since mobile phone penetration in Senegal and broader Africa is high, mobile numbers serve as a useful reference point for attaching location information to both homes and businesses, Sall explained. Mobile phones can also serve as an entry point for people to input location coordinates into NIMA Codes’ database.
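
The core idea — a phone number standing in for a street address that is really just a coordinate pair — can be sketched in a few lines of Python. The function names and the Dakar example below are hypothetical, not NIMA Codes’ actual API:

```python
# In-memory stand-in for NIMA Codes' database of number -> location records.
registry = {}

def register_location(phone, lat, lon, label=""):
    """Attach coordinates (and an optional label) to a mobile number."""
    registry[phone] = {"lat": lat, "lon": lon, "label": label}

def lookup(phone):
    """Resolve a mobile number to its stored coordinates, or None if unregistered."""
    return registry.get(phone)

# A Dakar resident registers their home against their Senegalese number.
register_location("+221771234567", 14.6928, -17.4467, label="home, Dakar")
print(lookup("+221771234567"))  # {'lat': 14.6928, 'lon': -17.4467, 'label': 'home, Dakar'}
```

Because the key is a number rather than a written street name, the same lookup works unchanged across language groups and borders.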

There are also advantages to assigning coordinates to digits, vs. letters, in Sub-Saharan Africa with its thousands of language groupings, Sall explained. “NIMA Codes is a cross-border and language-agnostic solution,” he said.


Sall believes that will work to the startup’s advantage when it expands services and database building to all 15 countries of the Economic Community of West African States by the end of 2020.

NIMA Codes is still plotting prospects for its best use-cases and revenue generation. It hasn’t secured partners yet and is still identifying how those downloading the app are using it. “Right now it’s mostly people who download the app…and register locations. Some delivery companies may be using it and not telling us,” said Sall.


The startup plans to generate revenue through partnerships and API usage fees.

Sall believes NIMA Codes’ new image-based location and chat-based business search functions could come together — akin to Google Maps’ nearby-places search — to create commercial revenue opportunities among merchants in West Africa’s large, informal economies.

Another obvious plug-in for NIMA Codes’ service is Africa’s fast-growing ride-hail and delivery markets. Sall points to 2019 data showing Uber paid $58 million over three years for map and search services. The U.S. ride-hail company has also tested an image-based directions app called OkHi in Kenya. And there are reports of Uber’s imminent expansion into Senegal.

Whatever the application, Sall believes NIMA Codes is cornering a central point of demand in Sub-Saharan Africa.

“The use-case is so big, you need to start with something and eventually expand,” he said.

“But everything wraps around having a reliable location service for people and small business.”

Duffel raises $30M led by Index Ventures to disintermediate legacy travel platforms

Huge travel platforms that run airline booking systems, like Sabre and Amadeus, were invented eons ago and are so large and cumbersome that innovating with them is no easy feat. In the same way that challenger banks have come along to re-invent the banking software stack, UK startup Duffel has done the same in the travel market, linking up airlines directly with travel agents via a 21st-century platform.

Today it announced a $30 million Series B funding round led by Index Ventures, joined by existing investors Benchmark Capital and Blossom Capital. Its airline partners already include American Airlines, British Airways, Lufthansa Group, Aegean Airlines, Vueling, and Iberia.

Duffel will use the new funds to hire more engineers and increase its broader team. It is focusing on expanding in North America and Europe, with its first customers drawn from the US, UK, Canada, France, Germany and Spain.

Duffel enables travel agencies to plug in directly to airlines’ reservation systems via an API so that they can pull real-time flight offers, make bookings, access live seat availability, and buy extra services. This means new digital and mobile app-based travel agencies – Duffel’s target market – can bypass the long lead times and high costs associated with the legacy flight booking systems. They are then able to see live seat availability from some of the world’s biggest airlines, as well as additional offers on in-flight meals or luggage allocations.
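
That search-to-booking flow can be sketched roughly as follows. This is a hypothetical Python illustration of the shapes involved — the field names, example fares, and helper functions are invented for illustration, not Duffel’s actual API:

```python
def build_offer_request(origin, destination, date, passengers=1):
    """Payload an agency's app might send to request live flight offers."""
    return {
        "slices": [{"origin": origin, "destination": destination,
                    "departure_date": date}],
        "passengers": [{"type": "adult"}] * passengers,
    }

def pick_cheapest(offers):
    """Select the lowest-fare offer among those the airlines returned."""
    return min(offers, key=lambda o: o["total_amount"])

request = build_offer_request("LHR", "JFK", "2020-03-01")
offers = [  # stand-in for a live response from the airlines' reservation systems
    {"id": "off_1", "airline": "British Airways", "total_amount": 420.00},
    {"id": "off_2", "airline": "American Airlines", "total_amount": 395.50},
]
print(pick_cheapest(offers)["id"])  # off_2
```

In a real integration, the offer request would be sent over the API, the offers would come back from the airlines in real time, and a booking would then be created against the chosen offer ID.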

Steve Domin, co-founder and CEO of Duffel, said: “A new breed of online agencies want to access reservation systems quickly and seamlessly. By reinventing the underwiring between online agents and airlines we can transform the world of travel booking and reduce barriers to entry for innovative new companies that are offering travelers a whole new way of creating a holiday or trip.”

In the same way that banking systems have been opened up by deregulation, the International Air Transport Association (IATA) created a new industry standard, known as New Distribution Capability (NDC), which transformed the way air products are retailed through the use of modern XML technology. The problem was, the legacy platforms didn’t take much interest. Duffel has obviously come along to take advantage of that.

Jan Hammer, partner at Index Ventures, said: “We are incredibly impressed by the Duffel team, who we have supported since the days of their seed funding. There is an opportunity here to transform the booking experience for travelers and ease many of the pain points in the industry. From the launch of budget airlines to sharing economy businesses like Airbnb, travel has changed and Duffel will provide the tools, built from the ground up, that make the next wave of innovation possible.”

Speaking to TechCrunch, Domin said: “Historically it’s been very hard to sell travel products to agencies. Integrations are hard. There is too much complexity. We are bundling it all into a very simple API and 2 hours later you can have it running on a site or a mobile app.”

“We are connecting directly to airlines’ reservation systems. If you go on a site that uses Duffel, we will forward – to the airline – the right search request, and the airline generates the offer in real-time.”

“Airlines were trying to modernize their booking systems with Amadeus and Sabre but they have not moved quickly on adapting to what the airlines wanted. When the IATA came up with its new XML platform, no-one wanted to use it. So we did.”

Is Duffel a threat to the legacy platforms? “Potentially,” he says, “but I don’t think they see it that way. They don’t see the benefit of engineering and developer experience. In a way, I hope we will be a threat but I don’t think we are right now.”

He said Duffel has future plans to expand to other products like trains and hotels.

Alexa, where are the legal limits on what Amazon can do with my health data?

The contract between the UK’s National Health Service (NHS) and ecommerce giant Amazon — for a health information licensing partnership involving its Alexa voice AI — has been released following a Freedom of Information request.

The government announced the partnership this summer. But the date on the contract, which was published on the gov.uk contracts finder site months after the FOI was filed, shows the open-ended arrangement to funnel nipped-and-tucked health advice from the NHS’ website to Alexa users in audio form was inked back in December 2018.

The contract is between the UK government and Amazon US (Amazon Digital Services, Delaware) — rather than Amazon UK. 

Nor is it a standard NHS Choices content syndication contract. A spokeswoman for the Department of Health and Social Care (DHSC) confirmed the legal agreement uses an Amazon contract template. She told us the department had worked jointly with Amazon to adapt the template to fit the intended use — i.e. access to publicly funded healthcare information from the NHS’ website.

The NHS does make the same information freely available on its website, of course, as well as via API — to some 1,500 organizations. But Amazon is not just any organization; it’s a powerful US platform giant with a massive ecommerce business.

The contract reflects that power imbalance: it is not a standard NHS content syndication agreement, but rather Amazon’s standard terms tweaked by DHSC.

“It was drawn up between both Amazon UK and the Department for Health and Social Care,” a department spokeswoman told us. “Given that Amazon is in the business of holding standard agreements with content providers they provided the template that was used as the starting point for the discussions but it was drawn up in negotiation with the Department for Health and Social Care, and obviously it was altered to apply to UK law rather than US law.”

In July, when the government officially announced the Alexa-NHS partnership, its PR provided a few sample queries of how Amazon’s voice AI might respond to what it dubbed “NHS-verified” information — such as: “Alexa, how do I treat a migraine?”; “Alexa, what are the symptoms of flu?”; “Alexa, what are the symptoms of chickenpox?”.

But of course as anyone who’s ever googled a health symptom could tell you, the types of stuff people are actually likely to ask Alexa — once they realize they can treat it as an NHS-verified info-dispensing robot, and go down the symptom-querying rabbit hole — is likely to range very far beyond the common cold.

At the official launch of what the government couched as a ‘collaboration’ with Amazon, it explained its decision to allow NHS content to be freely piped through Alexa by suggesting that voice technology has “the potential to reduce the pressure on the NHS and GPs by providing information for common illnesses”.

Its PR cited an unattributed claim that “by 2020, half of all searches are expected to be made through voice-assisted technology”.

This prediction is frequently attributed to ComScore, a media measurement firm that was last month charged with fraud by the SEC. However it actually appears to originate with computer scientist Andrew Ng, from when he was chief scientist at Chinese tech giant Baidu.

Econsultancy noted last year that Mary Meeker included Ng’s claim on a slide in her 2016 Internet Trends report — which is likely how the prediction got so widely amplified.

But on Meeker’s slide you can see that the prediction is in fact “images or speech”, not voice alone…


So it turns out the UK government incorrectly cited a tech giant prediction to push a claim that “voice search has been increasing rapidly” — in turn its justification for funnelling NHS users towards Amazon.

“We want to empower every patient to take better control of their healthcare and technology like this is a great example of how people can access reliable, world-leading NHS advice from the comfort of their home, reducing the pressure on our hardworking GPs and pharmacists,” said health secretary Matt Hancock in a July statement.

Since landing at the health department, the app-loving former digital minister has been pushing a tech-first agenda for transforming the NHS — promising to plug in “healthtech” apps and services, and touting “preventative, predictive and personalised care”. He’s also announced an AI lab housed within a new unit that’s intended to oversee the digitization of the NHS.

Compared with all that, plugging the NHS’ website into Alexa probably seems like an easy ‘on-message’ win. But as soon as the collaboration was announced, concerns were raised that the government is recklessly mixing the streams of critical (and sensitive) national healthcare infrastructure with the rapacious data-appetite of a foreign tech giant with both an advertising and ecommerce business, plus major ambitions of its own in the healthcare space.

On the latter front, just yesterday news broke of Amazon’s second health-related acquisition: Health Navigator, a startup with an API platform for integrating with health services, such as telemedicine and medical call centers, which offers natural language processing tools for documenting health complaints and care recommendations.

Last year Amazon also picked up online pharmacy PillPack — for just under $1BN. While last month it launched a pilot of a healthcare service offering to its own employees in and around Seattle, called Amazon Care. That looks intended to be a road-test for addressing the broader U.S. market down the line. So the company’s commercial designs on healthcare are becoming increasingly clear.

Returning to the UK, in response to early critical feedback on the Alexa-NHS arrangement, the IT delivery arm of the service, NHS Digital, published a blog post going into more detail about the arrangement — following what it couched as “interesting discussion about the challenges for the NHS of working with large commercial organisations like Amazon”.

A core critical “discussion” point is the question of what Amazon will do with people’s medical voice query data, given the partnership is clearly encouraging people to get used to asking Alexa for health advice.

“We have stuck to the fundamental principle of not agreeing a way of working with Amazon that we would not be willing to consider with any single partner – large or small. We have been careful about data, commercialisation, privacy and liability, and we have spent months working with knowledgeable colleagues to get it right,” NHS Digital claimed in July.

In another section of the blog post, responding to questions about what Amazon will do with the data and “what about privacy”, it further asserted there would be no health profiling of customers — writing:

We have worked with the Amazon team to ensure that we can be totally confident that Amazon is not sharing any of this information with third parties. Amazon has been very clear that it is not selling products or making product recommendations based on this health information, nor is it building a health profile on customers. All information is treated with high confidentiality. Amazon restrict access through multi-factor authentication, services are all encrypted, and regular audits run on their control environment to protect it.

Yet it turns out the contract DHSC signed with Amazon is just a content licensing agreement. There are no terms contained in it concerning what can or can’t be done with the medical voice query data Alexa is collecting with the help of “NHS-verified” information.

Per the contract terms, Amazon is required to attribute content to the NHS when Alexa responds to a query with information from the service’s website. (Though the company says Alexa also makes use of medical content from the Mayo Clinic and Wikipedia.) So, from the user’s point of view, they will at times feel like they’re talking to an NHS-branded service.

But without any legally binding confidentiality clauses around what can be done with their medical voice queries it’s not clear how NHS Digital can confidently assert that Amazon isn’t creating health profiles.

The situation seems to sum to, er, trust Amazon. (NHS Digital wouldn’t comment; saying it’s only responsible for delivery not policy setting, and referring us to the DHSC.)

Asked what it does with medical voice query data generated as a result of the NHS collaboration an Amazon spokesperson told us: “We do not build customer health profiles based on interactions with nhs.uk content or use such requests for marketing purposes.”

But the spokesperson could not point to any legally binding contract clauses in the licensing agreement that restrict what Amazon can do with people’s medical queries.

We’ve also asked the company to confirm whether medical voice queries that return NHS content are being processed in the US.

“This collaboration only provides content already available on the NHS.UK website, and absolutely no personal data is being shared by NHS to Amazon or vice versa,” Amazon also told us, eliding the key point that it’s not NHS data being shared with Amazon but NHS users, reassured by the presence of a trusted public brand, being encouraged to feed Alexa sensitive personal data by asking about their ailments and health concerns.

Bizarrely, the Department of Health and Social Care went further. Its spokeswoman claimed in an email that “there will be no data shared, collected or processed by Amazon and this is just an alternative way of providing readily available information from NHS.UK.”

When we spoke to DHSC on the phone prior to this, to raise the issue of medical voice query data generated via the partnership and fed to Amazon — also asking where in the contract are clauses to protect people’s data — the spokeswoman said she would have to get back to us.

All of which suggests the government has a very vague idea (to put it generously) of how cloud-powered voice AIs function.

Presumably no one at DHSC bothered to read the information on Amazon’s own Alexa privacy page — although the department spokeswoman was at least aware this page existed (because she knew Amazon had pointed us to what she called its “privacy notice”, which she said “sets out how customers are in control of their data and utterances”).

If you do read the page you’ll find Amazon offers some broad-brush explanation there which tells you that after an Alexa device has been woken by its wake word, the AI will “begin recording and sending your request to Amazon’s secure cloud”.

Ergo data is collected and processed. And indeed stored on Amazon’s servers. So, yes, data is ‘shared’.

The more detailed Alexa Internet Privacy Notice, meanwhile, sets out broad-brush parameters to enable Amazon’s reuse of Alexa user data — stating that “the information we learn from users helps us personalize and continually improve your Alexa experience and provide information about Internet trends, website popularity and traffic, and related content”. [emphasis ours]

The DHSC sees the matter very differently, though.

With no contractual binds covering the health-related queries UK users of Alexa are being encouraged to whisper into Amazon’s robotic ears — data that’s naturally linked to Alexa and Amazon account IDs (and which the Alexa Internet Privacy Notice also specifies can be accessed by “a limited number of employees”) — the government is accepting the tech giant’s standard data processing terms for a commercial, consumer product which is deeply integrated into its increasingly sprawling business empire.

Terms such as indefinite retention of audio recordings — unless users pro-actively request that they are deleted. And even then Amazon admitted this summer it doesn’t always delete the text transcripts of recordings. So even if you keep deleting all your audio snippets, traces of medical queries may well remain on Amazon’s servers.

Earlier this year it also emerged the company employs contractors around the world to listen in to Alexa recordings as part of internal efforts to improve the performance of the AI.

A number of tech giants recently admitted to the presence of such ‘speech grading’ programs, as they’re sometimes called — though none had been up front and transparent about the fact their shiny AIs needed an army of external human eavesdroppers to pull off a show of faux intelligence.

It’s been journalists highlighting the privacy risks for users of AI assistants; and media exposure leading to public pressure on tech giants to force changes to concealed internal processes that have, by default, treated people’s information as an owned commodity that exists to serve and preserve their own corporate interests.

Data protection? Only if you interpret the term as meaning your personal data is theirs to capture and that they’ll aggressively defend the IP they generate from it.

So, in other words, actual humans — both employed by Amazon directly and not — may be listening to the medical stuff you’re telling Alexa. Unless the user finds and activates a recently added ‘no human review’ option buried in Alexa settings.

Many of these arrangements remain under regulatory scrutiny in Europe. Amazon’s lead data protection regulator in Europe confirmed in August it’s in discussions with it over concerns related to its manual reviews of Alexa recordings. So UK citizens — whose taxes fund the NHS — might be forgiven for expecting more care from their own government around such a ‘collaboration’.

Rather than a wholesale swallowing of tech giant T&Cs in exchange for free access to the NHS brand and “NHS-verified” information which helps Amazon burnish Alexa’s utility and credibility, allowing it to gather valuable insights for its commercial healthcare ambitions.

To date there has been no recognition from DHSC that the government has a duty of care towards NHS users as regards potential risks its content partnership might generate as Alexa harvests their voice queries via a commercial conduit that only affords users very partial controls over what happens to their personal data.

Nor is DHSC considering the value being generously gifted by the state to Amazon — in exchange for a vague supposition that a few citizens might go to the doctor a bit less if a robot tells them what flu symptoms look like.

“The NHS logo is supposed to mean something,” says Sam Smith, coordinator at patient data privacy advocacy group, MedConfidential — one of the organizations that makes use of the NHS’ free APIs for health content (but which he points out did not write its own contract for the government to sign).

“When DHSC signed Amazon’s template contract to put the NHS logo on anything Amazon chooses to do, it left patients to fend for themselves against the business model of Amazon in America.”

In a related development this week, Europe’s data protection supervisor has warned of serious data protection concerns related to standard contracts EU institutions have inked with another tech giant, Microsoft, to use its software and services.

The watchdog recently created a strategic forum that’s intended to bring together the region’s public administrations to work on drawing up standard contracts with fairer terms for the public sector — to shrink the risk of institutions feeling outgunned and pressured into accepting T&Cs written by the same few powerful tech providers.

Such an effort is sorely needed — though it comes too late to hand-hold the UK government into striking more patient-sensitive terms with Amazon US.

Facebook rolls out new video tools, plus Instagram and IGTV scheduling feature

Facebook on Monday announced a number of updates aimed at video creators and publishers, during a session at the International Broadcasting Convention (IBC) taking place in Amsterdam. The updates involve changes to live video broadcasting, Facebook’s Watch Party, and Creator Studio, and they include enhancements to tools, expanded feature sets, and improved analytics, among other things.

The highlights include better ways to prep for and simulcast live broadcasts, ways to take better advantage of Watch Party events, new metrics to track video performance, and a much-anticipated option to schedule Instagram/IGTV content up to six months in advance.

Live Video


In terms of live video, Facebook says it listened to feedback from those who have been broadcasting live on its platform, and is now rolling out several highly-requested features to Facebook Pages (not Profiles). The changes are an attempt to better accommodate professional broadcasters who want to use Facebook’s live broadcasting capabilities instead of or in addition to other platforms, like YouTube.

Through the Live API, publishers can now use a “rehearsal” feature to broadcast live only to Page admins and editors in order to test new production setups, interactive features, and show formats before going live to a full audience. QVC has tested this feature, as they broadcast live on Facebook for hundreds of hours per month, and have wanted to try out new workflows and formats.

Publishers will also be able to trim the beginning and end of a live video, and can live broadcast for as long as 8 hours — double the previous limit of 4 hours.

This latter capability has already been used by NASA, which broadcast an 8-hour-long spacewalk, for example, and it also leaves room for broadcasting things like live sports, news events, and Twitch-like gaming broadcasts.

Most notable, perhaps, is that the company realizes live broadcasters need to serve their audiences outside of Facebook. Now, publishers will be able to use apps that let them stream to more than one streaming service at once, by simulcasting via the Live API.

Live video recently rolled out to Facebook Lite, as well, the company also noted.


Watch Party

Facebook additionally announced a few new updates for its co-watching feature, Watch Party, which include the ability for Pages to schedule a party in advance to build anticipation, support for “replays” that will let others enjoy the video after airing, the ability to tag business partners in branded content, and new analytics.

As for the latter, two new metrics are being added to Creator Studio: Minutes Viewed and Unique 60s Viewers (the total number of unique users who watched at least 60 seconds in a Watch Party). These complement existing metrics like reach and engagement.
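Facebook hasn’t published the exact formula behind the new metric, but the definition given here can be sketched. The snippet below is an illustration only; it assumes the 60-second threshold applies to a user’s cumulative watch time across viewing sessions, which the announcement doesn’t actually specify.

```python
from collections import defaultdict

def unique_60s_viewers(sessions):
    """Count distinct users whose total watch time reaches 60 seconds.

    `sessions` is an iterable of (user_id, seconds_watched) pairs,
    one pair per viewing session in the Watch Party.
    """
    totals = defaultdict(int)
    for user_id, seconds in sessions:
        totals[user_id] += seconds
    # A user counts once, no matter how many sessions it took them
    # to accumulate 60 seconds.
    return sum(1 for total in totals.values() if total >= 60)
```

Under this reading, a viewer who watched two 30-second stretches would count, while a per-session interpretation would exclude them.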

The Live Commenting feature, which allows a host to go live in a Watch Party to share their own commentary, is also now globally available.

Creator Studio

And wrapping all this up is an update to Creator Studio, which is what publishers use to post, manage, monetize and measure their content across both Facebook and Instagram.


The dashboard will soon add a new visualization layer in Loyalty Insights to help creators see which videos loyal fans want to see, by measuring which videos drive return viewers.

A new Distribution metric will score each video’s performance based on the Page’s historic average on a range of metrics, including: 1 Minute Views, Average Minutes Watched, and Retention. This feature, rolling out in the next few months, will offer an easy-to-read snapshot of a video’s performance.


Creator Studio will also now support 13 more languages for auto-captioning: Arabic, Chinese, German, Hindi, Italian, Malay, Russian, Tagalog, Tamil, Thai, Turkish, Urdu, and Vietnamese. These are in addition to those languages already available, which included English, French, Portuguese and Spanish.

Instagram & IGTV Scheduling 

And finally, publishers and creators will be able to publish and schedule their Instagram Feed and IGTV content up to six months in advance. In a few more months, Instagram Feed and IGTV drafting and editing will also become available, the company says.

This feature was already spotted in the wild before today’s announcement, and sent the social media management and influencer community abuzz. It also follows an update to the Instagram API last year to allow scheduling by third-party applications. However, a native feature is not as limited as some of those other options.

The feature is now open to all creators and publishers with Facebook Pages, whereas before some were seeing it labeled only as “coming soon” or were not able to get it working. Story scheduling is not yet included here, but it wouldn’t be surprising to see it added further down the road.


Web feature developers told to dial up attention on privacy and security

Web feature developers are being warned to step up attention to privacy and security as they design contributions.

Writing in a blog post about “evolving threats” to Internet users’ privacy and security, the W3C standards body’s technical architecture group (TAG) and Privacy Interest Group (PING) set out a series of revisions to the W3C’s Security and Privacy Questionnaire for web feature developers.

The questionnaire itself is not new. But the latest updates place greater emphasis on the need for contributors to assess and mitigate privacy impacts, with developers warned that “features may not be implemented if risks are found impossible or unsatisfactorily mitigated”.

In the blog post, independent researcher Lukasz Olejnik, currently serving as an invited expert at the W3C TAG, and Apple’s Jason Novak, representing the PING, write that the intent of the update is to make it “clear that feature developers should consider security and privacy early in the feature’s lifecycle” [emphasis theirs].

“The TAG will be carefully considering the security and privacy of a feature in their design reviews,” they further warn, adding: “A security and privacy considerations section of a specification is more than answers to the questionnaire.”

The revisions to the questionnaire include updates to the threat model and specific threats a specification author should consider — including a new high-level type of threat dubbed “legitimate misuse”, where the document stipulates that: “When designing a specification with security and privacy in mind, both use and misuse cases should be in scope.”

“Including this threat into the Security and Privacy Questionnaire is meant to highlight that just because a feature is possible does not mean that the feature should necessarily be developed, particularly if the benefitting audience is outnumbered by the adversely impacted audience, especially in the long term,” they write. “As a result, one mitigation for the privacy impact of a feature is for a user agent to drop the feature (or not implement it).”

“Features should be secure and private by default and issues mitigated in their design,” they further emphasize. “User agents should not be afraid of undermining their users’ privacy by implementing new web standards or need to resort to breaking specifications in implementation to preserve user privacy.”

The pair also urge specification authors to avoid blanket treatment of first and third parties, suggesting: “Specification authors may want to consider first and third parties separately in their feature to protect user security and privacy.”

The revisions to the questionnaire come at a time when browser makers are dialling up their response to privacy threats — encouraged by rising public awareness of the risks posed by data leaks, as well as increased regulatory action on data protection.

Last month the open source WebKit browser engine (which underpins Apple’s Safari browser) announced a new tracking prevention policy that takes the strictest line yet on background and cross-site tracking, saying it would treat attempts to circumvent the policy as akin to hacking — essentially putting privacy protection on a par with security.

Earlier this month Mozilla also pushed out an update to its Firefox browser that enables an anti-tracking cookie feature across the board, for existing users too — demoting third party cookies to default junk.

Even Google’s Chrome browser has made some tentative steps towards enhancing privacy — announcing changes to how it handles cookies earlier this year. Though the adtech giant has studiously avoided flipping on privacy by default in Chrome where third party tracking cookies are concerned, leading to accusations that the move is mostly privacy-washing.

More recently Google announced a long term plan to involve its Chromium browser engine in developing a new open standard for privacy — sparking concerns it’s trying to both kick the can on privacy protection and muddy the waters by shaping and pushing self-interested definitions which align with its core data-mining business interests.

There’s more activity to consider too. Earlier this year another data-mining adtech giant, Facebook, made its first major API contribution to Google’s Chrome browser — which it also brought to the W3C Performance Working Group.

Facebook does not have its own browser, of course. Which means that authoring contributions to web technologies offers the company an alternative conduit to try to influence Internet architecture in its favor.

The W3C TAG’s latest move to focus minds on privacy and security by default is timely.

It chimes with a wider industry shift towards pro-actively defending user data, and should rule out any rubberstamping of tech giants’ contributions to Internet architecture, which is obviously a good thing. Scrutiny remains the best defence against self-interest.

Clubhouse announces new collaboration tool and free version of its project management platform

Clubhouse — the software project management platform focused on team collaboration, workflow transparency and ease of integration — is taking another big step towards its goal of democratizing efficient software development.

Legacy project management programs in software development can often feel like an engineer feeding frenzy around a clunky stack of to-dos. Engineers have limited clarity into the work being done by other members of their team or into project tasks that fall outside of their own silo.

Clubhouse has long been focused on easing the headaches of software development workflows by providing full visibility into the status of specific tasks, the work being done by all team members across a project, as well as higher-level project plans and goals. Clubhouse also offers easy integration with other development tools as well as its own API to better support the cross-functionality a new user may want.

Today, Clubhouse released a free version of its project management platform that offers teams of up to 10 people unlimited access to the product’s full suite of features, as well as unlimited app integrations.

The company also announced it will be launching an engineer-focused collaboration and documentation tool later this year that will be fully integrated with the Clubhouse project management product. The new product, dubbed “Clubhouse Write”, is currently in beta; it will allow development teams to collaborate, organize and comment on project documentation in real time, enabling further inter-team communication and a more open workflow.

The broader mission behind the Clubhouse Write tool and the core product’s free plan is to support more key functions in the development process for more people, ultimately making it easier for anyone to start dynamic and distributed software teams and ideate on projects.


“Clubhouse Write” Beta Version. Image via Clubhouse

In an interview with TechCrunch, Clubhouse also discussed how the offerings will provide key competitive positioning against larger incumbents in the software project management space. Clubhouse has long competed with Atlassian’s project management tool “Jira”, but now the company is doubling down by launching Clubhouse Write which will compete head-on with Atlassian’s team collaboration product Confluence.

According to recent Atlassian investor presentations, Jira and Confluence make up the lion’s share of Atlassian’s business and revenues. And with Atlassian’s market capitalization of ~$30 billion, Clubhouse has its sights set on what it views as a significant market share opportunity.

According to Clubhouse, the company believes it’s in pole position to capture a serious chunk of Atlassian’s foothold, given it designed its two products to have tighter integration than the legacy platforms, and since Clubhouse is essentially providing free versions of what many teams already pay for.

And while Atlassian is far from the only competitor in the cluttered project management space, few if any competing platforms offer a full project tool kit for free, according to the company. Clubhouse is also encouraged by the strong support it has received from the engineering community to date. In a previous interview with TechCrunch’s Danny Crichton, the company said it had reached at least 700 enterprise customers before hiring any sales reps, and users of the platform already include Nubank, Dataiku, and Atrium, amongst thousands of others.

Clubhouse has ambitious plans to further expand its footprint, having raised $16 million to date through its Series A according to Crunchbase, with investments from a long list of Silicon Valley mainstays including Battery Ventures, Resolute Ventures, Lerer Hippeau, RRE Ventures, BoxGroup, and others.

A former CTO himself, Clubhouse cofounder and CEO Kurt Schrader is intimately familiar with the opacity in product development that frustrates engineers and complicates release schedules. Schrader and Clubhouse CMO Mitch Wainer believe Clubhouse can maintain its organic growth by staying hyperfocused on designing for project managers and creating simple workflows that keep engineers happy. According to Schrader, the company ultimately wants to be the “default [destination] for modern software teams to plan and build software.”

“Clubhouse is the best software project management app in the world,” he said. “We want all teams to have access to a world-class tool from day one whether it’s a 5 or 5,000 person team.”

Web host Hostinger says data breach may affect 14 million customers

Hostinger said it has reset user passwords as a “precautionary measure” after it detected unauthorized access to a database containing information on millions of its customers.

The breach is said to have happened on Thursday. The company said in a blog post it received an alert that one of its servers was improperly accessed. Using an access token found on the server, which can give access to systems without needing a username or a password, the hacker gained further access to the company’s systems, including an API database containing customer usernames, email addresses, and scrambled passwords.
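To illustrate the mechanics being described (with entirely hypothetical names, not Hostinger’s actual systems): an access token of this kind acts as a bearer credential, so any request presenting it is authorized without a username or password ever being checked.

```python
# Hypothetical sketch of bearer-token authorization. The token value and
# return codes are illustrative only, not Hostinger's implementation.
VALID_TOKENS = {"tok_abc123"}  # imagine this token was left on the server

def handle_request(headers: dict) -> int:
    """Minimal stand-in for an API gateway's auth check.

    Returns an HTTP-style status code. Note that nothing here identifies
    a person: whoever presents a valid token gets the token's full access.
    """
    auth = headers.get("Authorization", "")
    if auth.startswith("Bearer ") and auth[len("Bearer "):] in VALID_TOKENS:
        return 200  # request served, e.g. customer records returned
    return 401      # unauthorized
```

An attacker who finds the token on a compromised server can simply replay it, which is why such tokens are typically short-lived and revocable, and why Hostinger restricting the system cuts off the access.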

Hostinger said the API database stored about 14 million customer records. The company has more than 29 million customers on its books.

“We have restricted the vulnerable system, and such access is no longer available,” said Daugirdas Jankus, Hostinger’s chief marketing officer.

“We are in contact with the respective authorities,” said Jankus.


An email from Hostinger explaining the data breach. (Image: supplied)

News of the breach broke overnight. According to the company’s status page, affected customers have already received an email to reset their passwords.

The company said that financial data wasn’t taken in the breach, nor were customer website files or data affected.

But one customer who was affected by the breach accused the company of being potentially “misleading” about the scope of the breach.

A chat log seen by TechCrunch shows a customer support representative telling the customer it was “correct” that customers’ financial data can be retrieved by the API but that the company does “not store any payment data.” Hostinger uses multiple payment processors, the representative told the customer, but did not name them.

“They say they do not store payment details locally, but they have an API that can pull this information from the payment processor and the attacker had access to it,” the customer told TechCrunch.

We’ve reached out to Hostinger for more, but a spokesperson didn’t immediately comment.
