Tala, a Santa Monica, California-headquartered startup that creates a credit profile to provide uncollateralized loans to millions of people in emerging markets, has raised $110 million in a new financing round to enter India’s burgeoning fintech space.
The Series D financing for the five-year-old startup was led by RPS Ventures, with GGV Capital and previous investors IVP, Revolution Growth, Lowercase Capital, Data Collective VC, ThomVest Ventures, and PayPal also participating in the round.
The new round, which takes the startup’s total fundraising to $215 million, valued it above $500 million, a person familiar with the matter told TechCrunch. Tala has also raised an additional $100 million in debt, including a $50 million facility led by Colchis, in the last year.
The startup, which employs more than 550 people, will use the new capital to enter India, said founder and CEO Shivani Siroya, who built Tala after interviewing thousands of small and micro-businesses. In the run-up to the India launch, Tala began a 12-month pilot program in the country last year to conduct user research and understand the market. It has also set up a technology hub in Bangalore, she said.
Shivani Siroya (Tala CEO) at TechCrunch Disrupt NY 2017
“The opportunity is very massive in India, so we spent some time customizing our service for the local market,” she said.
According to the World Bank, more than 2 billion people globally have limited access to financial services and working capital. For these people, many of whom live in India, securing even a small loan is extremely challenging because they don’t have a credit score.
In recent years, several major digital payment platforms in India, including Paytm and MobiKwik, have started to offer small loans to users. Traditional banks, meanwhile, still lag in serving this segment, industry executives say.
Tala goes a step further and assumes the liability for any unpaid loans, Siroya said. More than 90% of Tala customers pay back their loans in 20 to 30 days and are recurring customers, she added.
The startup also forwards the positive credit history and rankings to the local credit bureaus to help people secure bigger and long-term loans in the future, she added.
Tala, which charges a one-time fee that is as low as 5% for each loan, relies on referrals and some marketing through radio and television to acquire new customers. “But a lot of these users come because they heard about us from their friends,” Siroya said.
As part of the new financing round, Kabir Misra, founding general partner of RPS Ventures, has joined Tala’s board of directors, the startup said.
Tala will also use a portion of its new funds to expand its footprint and team in its existing markets — East Africa, Mexico, and the Philippines — and to build new solutions.
Siroya said the startup has identified some more markets it plans to enter next. She did not disclose the names, but said she is eyeing more countries in South Asia and Latin America.
The process of using the Stoic journaling app is simple: You open the app in the morning and the evening, when you’ll be prompted to answer a couple of questions and perform a few simple exercises.
For example, this evening the app asked me to rate my current level of fulfillment and to identify what made me smile today, while also pointing me to guided exercises like journaling and breathing.
Stoic is part of the current batch of startups at Y Combinator (it’s taking the stage today at Demo Day). Founder Maciej Lobodzinski told me that his goal is to help users understand the different factors influencing their mental and emotional state.
“The core of the app is: We have this insight and we see what influences your mood and what you feel,” Lobodzinski said. He suggested that this is very different from the “super transactional” idea embedded in many other mental health and wellness apps, where “you pay for my app and you feel better.” In his view, “You should feel how you feel. It’s okay, how you feel, but you should know why you are feeling this way.”
So once there are a couple of weeks of data in the app, you should be able to look back and see how you were feeling on a certain day, and if there were activities that made you feel more or less fulfilled. Over time, Lobodzinski hopes to add more insights about “what influenced you, why you feel this way, why you are productive.”
“It’s an extremely practical framework,” he said. “When I talk to users, there are entrepreneurs, investors, traders — people who found out about the app because they were looking for how to deal with their stress …
If you are stressed with your everyday life and you can get the advice of the emperor of Rome, who dealt with much more serious things, it’s amazing how much better you can feel after that.”
At the same time, users have the option to receive quotes from different schools of thought — not just Stoicism but also Buddhism, Taoism and Catholicism. For some users, their app experience won’t be explicitly focused on Stoicism, but Lobodzinski said that even then, it forms the “spine” of the app’s approach.
The basic app is free, but Stoic charges $27.99 per year for a premium version that includes iCloud syncing and additional content.
Google Go, a lightweight version of Google’s search app, is today becoming available to all Android users worldwide. First launched in 2017 after months of beta testing, the app had been designed primarily for use in emerging markets where people are often accessing the internet for the first time on unstable connections by way of low-end Android devices.
Like many of the “Lite” versions of apps built for emerging markets, Google Go takes up less space on phones — now at just over 7MB — and it includes offline features to aid those with slow and intermittent internet connections. The app’s search results are optimized to save up to 40% data, Google also claims.
Beyond web search, Google Go includes other discovery features, as well — like the ability to tap through trending topics, voice search, image and GIF search, an easy way to switch between languages, and the ability to have web pages read aloud, powered by AI.
Lens allows users to point their smartphone camera at real-world objects in order to bring up relevant information. In Google Go, the Lens feature will help users who struggle to read. When the camera is pointed at text — like a bus schedule, sign or bank form, for example — Lens can read the text out loud, highlighting the words as they’re spoken. Users can also tap on a particular word to learn its definition or have the text translated.
While Lens was only a 100KB addition, according to Google, the updates to the Go app since launch have increased its size. Initially, it was a 5MB app; now it’s a little more than 7MB.
Previously, Google Go was only available in a few countries on Android Go edition devices. According to data from Sensor Tower, it has been installed approximately 17.5 million times globally, with the largest percentage of users in India (48%). Its next largest markets are Indonesia (16%), Brazil (14%), Nigeria (6%) and South Africa (4%), Sensor Tower says.
In total, it has been made available to 29 countries on Android Go edition devices, including: Angola, Benin, Botswana, Burkina Faso, Cameroon, Cape Verde, Cote d’Ivoire, Gabon, Guinea-Bissau, Kenya, Mali, Mauritius, Mozambique, Namibia, Niger, Nigeria, Philippines, Rwanda, Senegal, Tanzania, Togo, Uganda, Zambia and Zimbabwe.
Google says the app now has “millions” of users, and as of today, it will be available to all users worldwide on the Play Store.
Google says it decided to launch the app globally, including in markets where bandwidth is not a concern, because it understands that everyone at times can struggle with problems like limited phone storage or spotty connections.
Plus, it’s a lightweight app for reading and translating text. At Google I/O, the company had noted there are more than 800 million adults worldwide who struggle to read — and, of course, not all are located in emerging markets.
Google Go is one of many lightweight apps Google has built for emerging markets, along with YouTube Go, Files Go, Gmail Go, Google Maps Go, Gallery Go and Google Assistant Go, for example.
Superlatives abound at Cerebras, the until-today stealthy next-generation silicon chip company looking to make training a deep learning model as quick as buying toothpaste from Amazon. Launching after almost three years of quiet development, Cerebras introduced its new chip today — and it is a doozy. The “Wafer Scale Engine” is 1.2 trillion transistors (the most ever), 46,225 square millimeters (the largest ever), and includes 18 gigabytes of on-chip memory (the most of any chip on the market today) and 400,000 processing cores (guess the superlative).
Cerebras’ Wafer Scale Engine is larger than a typical Mac keyboard (via Cerebras Systems)
Superlatives aside, though, the technical challenges that Cerebras had to overcome to reach this milestone are, I think, the more interesting story here. I sat down with founder and CEO Andrew Feldman this afternoon to discuss what his 173 engineers have been quietly building just down the street these past few years with $112 million in venture capital funding from Benchmark and others.
Going big means nothing but challenges
First, a quick background on how the chips that power your phones and computers get made. Fabs like TSMC take standard-sized silicon wafers and divide them into individual chips, using light to etch the transistors into the silicon. Wafers are circles and chips are squares, so there is some basic geometry involved in subdividing that circle into a grid of individual chips.
One big challenge in this lithography process is that errors can creep into the manufacturing process, requiring extensive testing to verify quality and forcing fabs to throw away poorly performing chips. The smaller and more compact the chip, the less likely any individual chip will be inoperative, and the higher the yield for the fab. Higher yield equals higher profits.
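The link between die area and yield described above can be illustrated with the classic Poisson defect model, in which the probability that a die has zero defects falls off exponentially with its area. This is a toy sketch; the defect density and die sizes used here are illustrative assumptions, not figures from Cerebras or any fab:

```python
import math

def poisson_yield(defect_density_per_cm2: float, die_area_cm2: float) -> float:
    """Poisson yield model: probability that a die contains zero defects."""
    return math.exp(-defect_density_per_cm2 * die_area_cm2)

# Illustrative defect density (defects per cm^2); real fab numbers are proprietary.
d0 = 0.1

small_die = poisson_yield(d0, 1.0)      # a conventional ~1 cm^2 chip
wafer_die = poisson_yield(d0, 462.25)   # wafer-scale, ~46,225 mm^2

print(f"1 cm^2 die yield:  {small_die:.1%}")     # ~90% of dies are good
print(f"wafer-scale yield: {wafer_die:.3e}")     # effectively zero without redundancy
```

Under these assumptions a conventional die yields around 90%, while a naive wafer-scale die would essentially never come out defect-free, which is why the redundancy scheme described below matters.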
Cerebras throws out the idea of etching a bunch of individual chips onto a single wafer in favor of using the whole wafer itself as one gigantic chip. That allows all of those individual cores to connect with one another directly — vastly speeding up the critical feedback loops used in deep learning algorithms — but comes at the cost of huge manufacturing and design challenges to create and manage these chips.
Cerebras’ technical architecture and design was led by co-founder Sean Lie. Feldman and Lie worked together on a previous startup called SeaMicro, which sold to AMD in 2012 for $334 million. (Via Cerebras Systems)
The first challenge the team ran into, according to Feldman, was handling communication across the “scribe lines.” While the Cerebras chip encompasses a full wafer, today’s lithography equipment still has to act as if there are individual chips being etched into the silicon wafer. So the company had to invent new techniques to allow each of those individual chips to communicate with one another across the whole wafer. Working with TSMC, it not only invented new channels for communication, but also had to write new software to handle chips with a trillion-plus transistors.
The second challenge was yield. With a chip covering an entire silicon wafer, a single imperfection in the etching of that wafer could render the entire chip inoperative. This has been the blocker on whole-wafer technology for decades: due to the laws of physics, it is essentially impossible to etch a trillion transistors with perfect accuracy repeatedly.
Cerebras approached the problem using redundancy by adding extra cores throughout the chip that would be used as backup in the event that an error appeared in that core’s neighborhood on the wafer. “You have to hold only 1%, 1.5% of these guys aside,” Feldman explained to me. Leaving extra cores allows the chip to essentially self-heal, routing around the lithography error and making a whole wafer silicon chip viable.
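The spare-core scheme Feldman describes can be sketched as a simple remapping table: hold a small fraction of physical cores aside, and when a core comes out of the fab defective, route its logical identity to a spare. This is a toy simulation with made-up core counts and defect rates, not Cerebras’ actual routing logic:

```python
import random

def build_core_map(total_cores: int, spare_fraction: float,
                   defect_rate: float, seed: int = 0):
    """Toy sketch of defect-tolerant core mapping: defective cores are
    remapped to held-aside spares so the logical core array stays intact.
    Returns a logical->physical map, or None if spares run out."""
    rng = random.Random(seed)
    n_spares = int(total_cores * spare_fraction)
    n_logical = total_cores - n_spares
    # Physical cores that came out of the fab defective (random model).
    defective = {i for i in range(total_cores) if rng.random() < defect_rate}
    # Healthy spare cores, drawn from the held-aside pool.
    spares = [i for i in range(n_logical, total_cores) if i not in defective]
    core_map = {}
    for logical in range(n_logical):
        if logical not in defective:
            core_map[logical] = logical        # healthy: identity mapping
        elif spares:
            core_map[logical] = spares.pop()   # defective: route to a spare
        else:
            return None                        # out of spares: wafer unusable
    return core_map

# Hold ~1.5% of cores aside as spares, as Feldman describes.
mapping = build_core_map(total_cores=400_000, spare_fraction=0.015,
                         defect_rate=0.001)
print("wafer viable:", mapping is not None)
```

Even with hundreds of randomly scattered defects, the small spare pool is enough to keep the full logical array intact, which is the essence of the self-healing claim.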
Entering uncharted territory in chip design
Those first two challenges — communicating across the scribe lines between chips and handling yield — have flummoxed chip designers studying whole-wafer chips for decades. But they were known problems, and Feldman said they were actually easier to solve than expected by re-approaching them with modern tools.
He likens the challenge though to climbing Mount Everest. “It’s like the first set of guys failed to climb Mount Everest, they said, ‘Shit, that first part is really hard.’ And then the next set came along and said ‘That shit was nothing. That last hundred yards, that’s a problem.’”
And indeed, the toughest challenges according to Feldman for Cerebras were the next three, since no other chip designer had gotten past the scribe line communication and yield challenges to actually find what happened next.
The third challenge Cerebras confronted was handling thermal expansion. Chips get extremely hot in operation, but different materials expand at different rates. That means the connectors tethering a chip to its motherboard also need to thermally expand at precisely the same rate lest cracks develop between the two.
As Feldman put it: “How do you get a connector that can withstand [that]? Nobody had ever done that before, [and so] we had to invent a material. So we have PhDs in material science, [and] we had to invent a material that could absorb some of that difference.”
Once a chip is manufactured, it needs to be tested and packaged for shipment to original equipment manufacturers (OEMs) who add the chips into the products used by end customers (whether data centers or consumer laptops). There is a challenge though: absolutely nothing on the market is designed to handle a whole-wafer chip.
Cerebras designed its own testing and packaging system to handle its chip (Via Cerebras Systems)
“How on earth do you package it? Well, the answer is you invent a lot of shit. That is the truth. Nobody had a printed circuit board this size. Nobody had connectors. Nobody had a cold plate. Nobody had tools. Nobody had tools to align them. Nobody had tools to handle them. Nobody had any software to test,” Feldman explained. “And so we have designed this whole manufacturing flow, because nobody has ever done it.” Cerebras’ technology is much more than just the chip it sells — it also includes all of the associated machinery required to actually manufacture and package those chips.
Finally, all that processing power in one chip requires immense power and cooling. Cerebras’ chip uses 15 kilowatts of power to operate — a prodigious amount of power for an individual chip, although relatively comparable to a modern-sized AI cluster. All that power also needs to be cooled, and Cerebras had to design a new way to deliver both for such a large chip.
It essentially approached the problem by turning the chip on its side, in what Feldman called “using the Z-dimension.” The idea was that rather than trying to move power and cooling horizontally across the chip as is traditional, power and cooling are delivered vertically at all points across the chip, ensuring even and consistent access to both.
And so those were the next three challenges — thermal expansion, packaging, and power/cooling — that the company has worked around the clock to solve these past few years.
From theory to reality
Cerebras has a demo chip (I saw one, and yes, it is roughly the size of my head), and it has started to deliver prototypes to customers according to reports. The big challenge though as with all new chips is scaling production to meet customer demand.
For Cerebras, the situation is a bit unusual. Since it places so much computing power on one wafer, customers don’t necessarily need to buy dozens or hundreds of chips and stitch them together to create a compute cluster. Instead, they may only need a handful of Cerebras chips for their deep-learning needs. The company’s next major phase is to reach scale and ensure a steady delivery of its chips, which it packages as a whole system “appliance” that also includes its proprietary cooling technology.
Expect to hear more details of Cerebras technology in the coming months, particularly as the fight over the future of deep learning processing workflows continues to heat up.
The chat fiction stories offered in Mammoth Media’s mobile app Yarn are about to get more interactive.
The branching narrative mechanic should be familiar to anyone who read Choose Your Own Adventure books when they were kids — you read a story, and at certain key moments, you choose from different options that determine where the plot will go next.
One thing you probably won’t recognize from your childhood reading is the fact that some of these choices aren’t free — to select them, you’ll need to spend money in the form of Yarn’s new virtual currency, gems.
Mammoth founder and CEO Benoit Vatere explained that in those cases, there might be two choices that you can select for free, plus a third that you need to pay for. Usually, it will be something that accelerates the story or sends it off in a new direction — in a horror story, you could get the option to stab someone, or in a romance story, your character could get the option to go home with someone.
Vatere added, “It’s not only being able to have a different branch in the story, but being able to play as a different character lead … Instead of being the male character, would they like to be the female character and really see a different perspective?”
He acknowledged that some of Yarn’s paying subscribers might be cranky about being asked to pay more, but he said the goal is that those subscribers can have “a full experience” without having to buy additional gems.
Yarn is launching interactive stories with titles including “Blue Ivy’s Nanny,” where it’s your first day on the job as Beyoncé’s nanny (I’m going to go ahead and guess that Rivera worked on this one); a romance story called “Playing the Field”; a horror story called “Haunted Camper” and a drama called “Trapped.” Vatere also said there are plans for branched narratives tying into existing Yarn franchises, and set in the world of Archie Comics.
Overall, Vatere said he’s hoping that this will lead to more engagement from Yarn readers, while also opening up new opportunities for monetization.
“Subscription is a great model, but subscription has a cap,” he said. That’s why Mammoth is experimenting with virtual currency, and why it plans to make these stories available to non-subscribers.
A group of app developers have penned a letter to Apple CEO Tim Cook, arguing that certain privacy-focused changes to Apple’s iOS 13 operating system will hurt their business. In a report by The Information, the developers were said to have accused Apple of anti-competitive behavior when it comes to how apps can access user location data.
With iOS 13, Apple aims to curtail apps’ abuse of its location-tracking features as part of its larger privacy focus as a company.
Today, many apps ask users upon first launch to give their app the “Always Allow” location-tracking permission. Users can confirm this with a tap, unwittingly giving apps far more access to their location data than is actually necessary, in many cases.
Now, users will be presented with a new option upon launch, “Allow Once,” which allows them to first explore the app to see if it fits their needs before granting the developer the ability to continually access location data. This option will be presented alongside the existing options, “Allow While Using App” and “Don’t Allow.”
The app developers argue that this change may confuse less technical users, who will assume the app isn’t functioning properly unless they figure out how to change their iOS Settings to ensure the app has the proper permissions.
The developers’ argument is a valid assessment of user behavior and how such a change could impact their apps. The added friction of having to go to Settings and toggle a switch for an app to function can cause users to abandon apps altogether. It’s also, in part, why apps like Safari ad blockers and iOS replacement keyboards never really went mainstream: they require extra steps involving the iOS Settings.
That said, the changes Apple is rolling out with iOS 13 don’t actually break these apps entirely. They just require the apps to refine their onboarding instructions. Instead of asking for the “Always Allow” permission up front, they will need to point users to the iOS Settings screen, or limit the app’s functionality until it’s granted the “Always Allow” permission.
In addition, the developers’ letter pointed out that Apple’s own built-in apps (like Find My) aren’t treated like this, which raises anti-competitive concerns.
The letter also noted that Apple in iOS 13 would not allow developers to use PushKit for any other purpose beyond internet voice calls — again, due to the fact that some developers abused this toolkit to collect private user data.
“We understand that there were certain developers, specifically messaging apps, that were using this as a backdoor to collect user data,” the email said, according to the report. “While we agree loopholes like this should be closed, the current Apple plan to remove [access to the internet voice feature] will have unintended consequences: it will effectively shut down apps that have a valid need for real-time location.”
The letter was signed by Tile CEO CJ Prober; Arity (Allstate) President Gary Hallgren; Life360 CEO Chris Hulls; Happn (the dating app) CEO Didier Rappaport; Zenly (Snap) CEO Antoine Martin; Zendrive CEO Jonathan Matus; and Jared Allgood, chief strategy officer of social networking app Twenty.
Apple responded to The Information by saying that any changes it makes to the operating system are “in service to the user” and to their privacy. It also noted that any apps it distributes from the App Store have to abide by the same procedures.
It’s another example of how erring on the side of increased user privacy can lead to complications and friction for end users. One possible solution could be allowing apps to present their own in-app Settings screen where users could toggle the app’s full set of permissions directly — including everything from location data to push notifications to the app’s use of cellular data or Bluetooth sharing.
It’s true, you’ve got the Galaxy Note to thank for your big phone. When the device hit the scene at IFA 2011, large screens were still a punchline. That same year, Steve Jobs famously joked about phones with screens larger than four inches, telling a crowd of reporters, “nobody’s going to buy that.”
In 2019, the average screen size hovers around 5.5 inches. That’s a touch larger than the original Note’s 5.3 inches — a size that was pretty widely mocked by much of the industry press at the time. Of course, much of the mainstreaming of larger phones comes courtesy of much-improved screen-to-body ratios, another area where Samsung has continued to lead the way.
In some sense, the Note has been doomed by its own success. As the rest of the industry caught up, the line blended into the background. Samsung didn’t do the product any favors by dropping the pretense of distinction between the Note and its Galaxy S line.
Ultimately, the two products served as an opportunity to have a six-month refresh cycle for its flagships. Samsung, of course, has been hit with the same sort of malaise as the rest of the industry. The smartphone market isn’t the unstoppable machine it appeared to be two or three years back.
Like the rest of the industry, the company painted itself into a corner with the smartphone race, creating flagships good enough to convince users to hold onto them for an extra year or two, greatly slowing the upgrade cycle in the process. Ever-inflating prices have also been a part of smartphone sales stagnation — something Samsung and the Note are as guilty of as any.
So what’s a poor smartphone manufacturer to do? The Note 10 represents baby steps. As it did with the S line recently, Samsung is now offering two models. The base Note 10 represents a rare step backward in terms of screen size, shrinking down slightly from 6.4 to 6.3 inches, while reducing resolution from Quad HD to Full HD.
The seemingly regressive step lets Samsung come in a bit under last year’s jaw-dropping $1,000. The new Note is only $50 cheaper, but moving from four figures to three may have a positive psychological effect on wary buyers. Meanwhile, the slightly smaller screen, coupled with a better screen-to-body ratio, makes for a device that’s surprisingly slim.
If anything, the Note 10+ feels like the true successor to the Note line. The baseline device could have just as well been labeled the Note 10 Lite. That’s something Samsung is keenly aware of, as it targets first-time Note users with the 10 and true believers with the 10+. In both cases, Samsung is faced with the same task as the rest of the industry: offering a compelling reason for users to upgrade.
Earlier this week, a Note 9 owner asked me whether the new device warrants an upgrade. The answer is, of course, no. The pace of smartphone innovation has slowed, even as prices have risen. Honestly, the 10 doesn’t offer that many compelling reasons to upgrade even from the Note 8.
That’s not a slight against Samsung or the Note, per se. If anything, it’s a reflection on the fact that these phones are quite good — and have been for a while. Anecdotally, industry excitement around these devices has been tapering for a while now, and the device’s launch in the midst of the doldrums of August likely didn’t help much.
The past few years have seen smartphones transform from coveted, bleeding-edge luxury to necessity. The good news to that end, however, is that the Note continues to be among the best devices out there.
The common refrain in the earliest days of the phablet was the inability to wrap one’s fingers around the device. It’s a pragmatic issue. Certainly you don’t want to use a phone day to day that’s impossible to hold. But Samsung’s remarkable job of improving screen-to-body ratio continues here. In fact, the 6.8-inch Note 10+ has roughly the same footprint as the 6.4-inch Note 9.
The issue will still persist for those with smaller hands — though thankfully Samsung’s got a solution for them in the Note 10. For the rest of us, the Note 10+ is easily held in one hand and slipped in and out of pants pockets. I realize these seem like weird things to say at this point, but I assure you they were legitimate concerns in the earliest days of the phablet, when these things were giant hunks of plastic and glass.
Samsung’s curved display once again does much of the heavy lifting here, allowing the screen to stretch nearly from side to side with only a little bezel at the edge. Up top is a hole-punch camera — that’s “Infinity O” to you. Those with keen eyes no doubt immediately noticed that Samsung has dropped the dual selfie camera here, moving toward the more popular hole-punch camera.
The company’s reasoning for this was both aesthetic and, apparently, practical. It moved back down to a single front-facing camera (10 megapixels), using reasoning similar to Google’s for the single rear-facing camera on the Pixel: software has greatly improved what companies can do with a single lens. That’s certainly true to a degree, and the case is especially strong for the selfie camera, which we generally demand less of than the rear-facing array.
The company’s gone increasingly minimalist with the design language — something I appreciate. Over the years, as the smartphone has increasingly become a day to day utility, the product’s design has increasingly gotten out of its own way. The front and back are both made of a curved Gorilla Glass that butts up against a thin metal form with a total thickness of 7.9 millimeters.
On certain smooth surfaces like glass, you’ll occasionally find the device gliding slightly. I’d say the chances of dropping it are pretty decent with its frictionless design language, so you’re going to want to get a case for your $1,000 phone. Before you do, admire that color scheme on the back. There are four choices in all. Like the rest of the press, we ended up with Aura Glow.
It features a lovely, prismatic effect when light hits it. It’s proven a bit tricky to photograph, honestly. It’s also a fingerprint magnet, but these are the prices we pay to have the prettiest phone on the block.
One of the interesting footnotes here is how much the design of the 10 will be defined by what the device lost. There are two missing pieces here — both of which are a kind of concession from Samsung for different reasons. And for different reasons, both feel inevitable.
The headphone jack is, of course, the biggie. Samsung kicked and screamed on that one, holding onto the 3.5mm jack for dear life and roundly mocking the competition (read: Apple) at every turn. The company must have known it was only a matter of time, even before the iPhone dropped the port three years ago.
Samsung glossed over the end of the jack (and apparently unlisted its Apple-mocking ads in the process) during the Note’s launch event. It was a stark contrast from a briefing we got around the device’s announcement, where the company’s reps spent significantly more time justifying the move. They know us well enough to know that we’d spend a little time taking the piss out of the company after three years of it making the once ubiquitous port a feature. All’s fair in love and port. And honestly, it was mostly just some good-natured ribbing. Welcome to the club, Samsung.
As for why Samsung did it now, the answer seems to be two-fold. The first is a kind of critical mass in Bluetooth headset usage. Allow me to quote myself from a few weeks back:
The tipping point, it says, came when its internal metrics showed that a majority of users on its flagship devices (the S and Note lines) moved to Bluetooth streaming. The company says the number is now in excess of 70% of users.
Also, as we’re all abundantly aware, the company put its big-battery ambitions on hold for a bit as it dealt with…more burning problems. A couple of recalls, a humble press release and an eight-point battery check later, and batteries are getting bigger again. There’s a 3,500mAh battery in the Note 10 and a 4,300mAh one in the 10+. I’m happy to report that the latter got me through a full day plus three hours on a charge. Not bad, given all of the music and videos I subjected it to in that time.
There’s no USB-C dongle in-box. The rumors got that one wrong. You can pick up a Samsung-branded adapter for $15, or get one for much cheaper elsewhere. There is, however, a pair of AKG USB-C headphones in-box. I’ve said this before and I’ll say it again: Samsung doesn’t get enough credit for its free headphones. I’ve been known to use the pairs with other devices. They’re not the greatest in the world, but they’re better-sounding and more comfortable than what a lot of other companies offer in-box.
Obviously the standard no headphone jack things apply here. You can’t use the wired headphones and charge at the same time (unless you go wireless). You know the deal.
The other missing piece here is the Bixby button. I’m sure there are a handful of folks out there who will bemoan its loss, but that’s almost certainly a minority of the minority here. Since the button was first introduced, folks were asking for the ability to remap it. Samsung finally relented on that front, and with the Note 10, it drops the button altogether.
Thus far the smart assistant has been a disappointment. That’s due in no small part to a late launch compared to the likes of Siri, Alexa and Assistant, coupled with a general lack of capability at launch. In Samsung’s defense, the company’s been working to fix that with some pretty massive investment and a big push to court developers. There’s hope for Bixby yet, but a majority of users weren’t eager to have the assistant thrust upon them.
Instead, the power button has been shifted to the left of the device, just under the volume rocker. I preferred having it on the other side, especially for certain functions like screenshotting (something, granted, I do much more than the average user when reviewing a phone). That’s a pretty small quibble, of course.
Bixby can now be quickly accessed by holding down the power button. Handily, Samsung still lets you reassign that function, if you really want Bixby out of your life. You can set a long press to bring up the power-off menu instead, or a double press to launch Bixby or a third-party app (I opted for Spotify, probably my most used these days), though not a different assistant.
Imaging, meanwhile, is something Samsung’s been doing for a long time. The past several generations of S and Note devices have had great camera systems, and it continues to be the main point of improvement. It’s also one of few points of distinction between the 10 and 10+, aside from size.
The Note 10+ has four, count ’em, four rear-facing cameras. They are as follows:
Ultra Wide: 16 megapixel
Wide: 12 megapixel
Telephoto: 12 megapixel
DepthVision: depth-sensing (time of flight)
That last one is only on the plus. It comprises two little circles to the right of the primary camera array, just below the flash. We’ll get to that in a second.
The main camera array continues to be one of the best in mobile. The inclusion of telephoto and ultra-wide lenses allows for a wide range of different shots, and the hardware, coupled with machine learning, makes it a lot more difficult to take a bad photo (though believe me, it’s still possible).
The live focus feature (Portrait mode, essentially) comes to video, with four different filters, including Color Point, which makes everything but the subject black and white.
Samsung’s also brought a very simple video editor into the mix here, which is nice for quick edits on the fly. You can trim clips, splice in other clips, add subtitles and captions, and apply filters and music. It’s pretty beefy for something baked directly into the camera app, and one of the better uses I’ve found for the S Pen.
Note 10+ with Super Steady (left), iPhone XS (right)
Ditto for the improved Super Steady offering, which smooths out shaky video, including in Hyperlapse mode, where hand shake is a big issue. It works well, but you do lose access to other features, including zoom. For that reason, it’s off by default and should be used relatively sparingly.
Note 10+ (left), iPhone XS (right)
Zoom-In Mic is a clever addition, as well. While shooting video, pinch-zooming on something amplifies the audio from that area. I’ve been playing around with it in this cafe. It’s interesting, but less than perfect.
Zooming in on something doesn’t exactly cancel out ambient noise from outside of the frame. Everything still gets amplified and, as with digital picture zoom, a lot of noise is introduced in the process. Those hoping for a kind of spy microphone, I’m sorry/happy to report that this definitely is not that.
The DepthVision Camera is also pretty limited as I write this. If anything, it’s Samsung’s attempt to brace for a future when things like augmented reality will (theoretically) play a much larger role in our mobile computing. In a conversation I had with the company ahead of launch, they suggested that a lot of the camera’s AR functions will fall in the hands of developers.
For now, Quick Measure is the one practical use. The app is a lot like Apple’s more simply titled Measure. Fire it up, move the camera around to get a lay of the land and it will measure nearby objects for you. An interesting showcase for AR potential? Sure. Earth shattering? Naw. It also seems to be a bit of a battery drain, sucking up the last few bits of juice as I was running it down.
3D Scanner, on the other hand, got by far the biggest applause line of the Note event. And, indeed, it’s impressive. In the stage demo, a Samsung employee scanned a stuffed pink beaver (I’m not making this up), created a 3D image and animated it using an associate’s movements. Practical? Not really. Cool? Definitely.
It was, however, not available at press time. Hopefully it proves to be more than vaporware, especially if that demo helped push some viewers over to the 10+. Without it, there’s just not a lot of use for the depth camera at the moment.
There’s also AR Doodle, which fills a similar spot as much of the company’s AR offerings. It’s kind of fun, but again, not particularly useful. You’ll likely end up playing with it for a few minutes and forget about it entirely. Such is life.
The feature is built into the camera app, using depth sensing to orient live drawings. With the stylus you can draw in space or doodle on people’s faces. It’s neat, the AR works okay and I was bored with it in about three minutes. Like Quick Measure, the feature is as much a proof of concept as anything. But that’s always been a part of Samsung’s kitchen-sink approach — some combination of useful and silly.
That said, points to Samsung for continuing to de-creepify AR Emojis. Those have moved firmly away from the uncanny valley into something more cartoony/adorable. Less ironic usage will surely follow.
Asked about the key differences between the S and Note lines, Samsung’s response was simple: the S Pen. Otherwise, the lines are relatively interchangeable.
Samsung’s revival of the stylus never quite caught on for handsets the way the phablet form factor did. Styluses have made a pretty significant comeback for tablets, but the Note remains fairly singular when it comes to the S Pen. I’ve never been a big user myself, but those who like it swear by it. It’s one of those things, like the ThinkPad pointing stick or the BlackBerry scroll wheel.
Like the phone itself, the peripheral has been streamlined with a unibody design. Samsung also continues to add capabilities. It can be used to control music, advance slideshows and snap photos. None of that is likely to convince S Pen skeptics (I prefer using the buttons on the included headphones for music control, for example), but more versatility is generally a good thing.
If anything is going to convince people to pick up the S Pen this time out, it’s the improved handwriting recognition. That’s pretty impressive. It was even able to decipher my awful chicken scratch.
You get the same sort of bleeding-edge specs here you’ve come to expect from Samsung’s flagships. The 10+ gets you a baseline 256GB of storage (upgradeable to 512GB), coupled with a beefy 12GB of RAM (the regular Note is a still-respectable 8GB/256GB). The 5G version sports the same numbers and battery (likely making its total life a bit shorter per charge). That’s a shift from the S10, whose 5G version was specced out like crazy. Likely Samsung is bracing for 5G to become less of a novelty in the next year or so.
The new Note also benefits from other recent additions, like the in-display fingerprint reader and wireless power sharing. Both are nice additions, but neither is likely enough to warrant an immediate upgrade.
Once again, that’s not an indictment of Samsung, so much as a reflection of where we are in the life cycle of a mature smartphone industry. The Note 10+ is another good addition to one of the leading smartphone lines. It succeeds as both a productivity device (thanks to additions like DeX and added cross-platform functionality with Windows 10) and an everyday handset.
There’s not enough on-board to really recommend an upgrade from the Note 8 or 9 — especially at that $1,099 price. People are holding onto their devices for longer, and for good reason (as detailed above). But if you need a new phone, are looking for something big and flashy and are willing to splurge, the Note continues to be the one to beat.
Google publicly disclosed its acquisition of homework helper app Socratic in an announcement this week, detailing the app’s relaunch on iOS with support for the company’s A.I. technology. The acquisition apparently flew under the radar — Google says it bought the app last year.
According to one founder’s LinkedIn update, that was in March 2018. Google hasn’t responded to requests for comment for more details about the deal, but we’ll update if that changes.
Initially, the app offered a Quora-like Q&A platform where students could ask questions which were answered by experts. By the time Socratic raised $6 million in Series A funding back in 2015, its community had grown to around 500,000 students. The company later evolved to focus less on connecting users and more on utility.
With the mobile app it launched in 2015, Socratic added the ability to take a photo of a homework question and get instant explanations. This is similar to many other apps in the space, like Photomath, Mathway, DoYourMath and others.
However, Socratic isn’t just a math helper — it can also tackle subjects like science, literature, social studies, and more.
That strategy, apparently, was to make Socratic a Google A.I.-powered product. According to a Google blog post penned by Socratic co-founder Shreyans Bhansali, now the company’s engineering manager, the updated iOS app uses A.I. technology to help users.
The new version of the iOS app still allows you to snap a photo to get answers, or you can speak your question.
For example, if a student takes a photo from a classroom handout or asks a question like “what’s the difference between distance and displacement?,” Socratic will return a top match, followed by explainers, a Q&A section, and even related YouTube videos and web links. It’s almost like a custom search engine just for your homework questions.
Google also says it has built and trained algorithms that analyze the student’s question and identify the underlying concepts in order to point users to these resources. For students who need even more help, the app can break the concepts down into smaller, easy-to-understand lessons.
In addition, the app includes subject guides on over 1,000 higher education and high school topics, developed with help from educators. The study guides can help students prepare for tests or just better learn a particular concept.
“In building educational resources for teachers and students, we’ve spent a lot of time talking to them about challenges they face and how we can help,” writes Bhansali. “We’ve heard that students often get ‘stuck’ while studying. When they have questions in the classroom, a teacher can quickly clarify—but it’s frustrating for students who spend hours trying to find answers while studying on their own,” he says.
This is where Socratic will help.
That said, the acquisition could help Google in other ways, too. Beyond Socratic’s primary focus as a homework helper, the deal could aid Google Assistant technology across platforms, as the virtual assistant could learn to answer more complex questions that Google’s Knowledge Graph doesn’t already cover.
The relaunched, A.I.-powered version of Socratic by Google arrived on iOS on Thursday, where the app update text also discloses that the app is now owned by Google.
The Android version of the app will launch this fall.
Each month millions of Indians are coming online for the first time, making India the last great growth market for internet companies worldwide. But winning them presents its own challenges.
These users, most of whom live in small cities and villages in India, can’t speak English. Their interests and needs are different from those of their counterparts in large cities. When they come online, the world wide web, which predominantly caters to English speakers, suddenly seems tiny, Google executives acknowledged at a media conference last year. According to a KPMG-Google report (PDF) on Indian languages, there will be 536 million non-English-speaking internet users in India by 2021.
Many companies are increasingly adding support for more languages, and Silicon Valley giants such as Google are developing tools to populate the web with content in Indian languages.
But there is still room for others to participate. On Friday, a new startup announced it is also in the race. And it has already received the backing of Y Combinator (YC).
Lokal is a news app that wants to bring local news to hundreds of millions of users in India in their regional languages. The startup, which is currently available in the Telugu language, has already amassed more than two million users, Jani Pasha, co-founder of Lokal, told TechCrunch in an interview.
There are tens of thousands of publications in India and several news aggregators that showcase the top stories from the mainstream outlets. But very few today are focusing on local news and delivering it in a language that the masses can understand, Pasha said.
Lokal is building a network of stringers and freelance reporters who produce original reporting around the issues and current affairs of local towns and cities. The app is updated throughout the day with regional news and also includes an “information” stream that shows things like current price of vegetables, upcoming events and contact details for local doctors and police stations.
The platform has grown to cover 18 districts in South India and is slowly ramping up its operations to more corners of the country. The early signs show that people are increasingly finding Lokal useful. “In 11 of the 18 districts we cover, we already have a larger presence and reader base than other media houses,” Pasha said.
Before creating Lokal, Pasha and the other co-founder of the startup, Vipul Chaudhary, attempted to develop a news aggregator app. The app presented news events in a timeline, offering context around each development.
“We made the biggest mistake. We built the product for four to five months without ever consulting with the users. We quickly found that nobody was using it. We went back to the drawing board and started interviewing users to understand what they wanted. How they consumed news, and where they got their news from,” he said.
“One thing we learned was that most of these users in tier 2 and tier 3 India still heavily rely on newspapers. Newspapers still carry a lot of local news and they rely on stringers who produce these news pieces and source them to publications,” he added.
But newspapers have limited pages, and they are slow. So Pasha and the team set out to build a platform that addresses those two shortcomings.
Pasha tried to replicate the model by distributing local news, sourced from stringers, in a WhatsApp group. “That one WhatsApp group quickly became one of many as more and more people kept joining us,” he recalls. And that led to the creation of Lokal.
Along the journey, the team found that classifieds, matrimonial ads and things like birthday wishes are still driving people to newspapers, so Lokal has brought those things to the platform.
Pasha said Lokal will expand to three more states in the coming months. It will also begin to experiment with monetization, though that is not the primary focus currently. “The plan is to eventually bring this to entire India,” he said.
A growing number of startups today are attempting to build solutions for what they call India 2 and India 3 — the users who don’t live in major cities, don’t speak English and are not as financially strong.
ShareChat, a social media platform that serves users in 15 regional languages — but not English — said recently it has raised $100 million in a round led by Twitter. The app serves more than 60 million users each month, a figure it wants to double in the next year.
Digital abuse may take many forms: hacking the victim’s computer, using knowledge of passwords or personal data to impersonate them or interfere with their presence online, accessing photos to track their location, and so on. As with other forms of abuse, there are as many patterns as there are people who suffer from it.
But with something like emotional abuse, there are decades of studies and clinical approaches to address how to categorize and cope with it. Not so with newer phenomena like being hacked or stalked via social media. That means there’s little in the way of a standard playbook for them, and both the abused and those helping them are left scrambling for answers.
“Prior to this work, people were reporting that the abusers were very sophisticated hackers, and clients were receiving inconsistent advice. Some people were saying, ‘Throw your device out.’ Other people were saying, ‘Delete the app.’ But there wasn’t a clear understanding of how this abuse was happening and why it was happening,” explained Diana Freed, a doctoral student at Cornell Tech and co-author of a new paper about digital abuse.
“They were making their best efforts, but there was no uniform way to address this,” said co-author Sam Havron. “They were using Google to try to help clients with their abuse situations.”
Investigating this problem with the help of a National Science Foundation grant to examine the role of tech in domestic abuse, they and faculty collaborators at Cornell and NYU came up with a new approach.
There’s a standardized questionnaire to characterize the type of tech-based abuse being experienced. It may not occur to someone who isn’t tech-savvy that their partner may know their passwords, or that there are social media settings they can use to prevent that partner from seeing their posts. This information and other data are added to a sort of digital-presence diagram the team calls the “technograph,” which helps the victim visualize their technological assets and exposure.
The team also created a device they call the IPV Spyware Discovery, or ISDi. It’s basically spyware-scanning software loaded on a separate device, which can check the victim’s device without installing anything on it. This is important because an abuser may have installed tracking software that would alert them if the victim tried to remove it. Sound extreme? Not to people fighting a custody battle who can’t seem to escape the all-seeing eye of an abusive ex. And these spying tools are readily available for purchase.
“It’s consistent, it’s data-driven and it takes into account at each phase what the abuser will know if the client makes changes. This is giving people a more accurate way to make decisions and providing them with a comprehensive understanding of how things are happening,” explained Freed.
Even if the abuse can’t be instantly counteracted, it can be helpful simply to understand it and know that there are some steps that can be taken to help.