Zoom to start first phase of E2E encryption rollout next week

Zoom will begin rolling out end-to-end encryption to users of its videoconferencing platform from next week, it said today.

The platform, whose fortunes have been supercharged by the pandemic-driven boom in remote working and socializing this year, has been working on rebooting its battered reputation in the areas of security and privacy since April — after it was called out on misleading marketing claims of having E2E encryption (when it did not). E2E encryption is now finally on its way, though.

“We’re excited to announce that starting next week, Zoom’s end-to-end encryption (E2EE) offering will be available as a technical preview, which means we’re proactively soliciting feedback from users for the first 30 days,” it writes in a blog post. “Zoom users — free and paid — around the world can host up to 200 participants in an E2EE meeting on Zoom, providing increased privacy and security for your Zoom sessions.”

Zoom acquired Keybase in May, saying then that it was aiming to develop “the most broadly used enterprise end-to-end encryption offering”.

Initially, CEO Eric Yuan said this level of encryption would be reserved for fee-paying users only. But after facing a storm of criticism, the company enacted a swift U-turn — saying in June that all users would get the highest level of security, regardless of whether they pay to use its service.

Zoom confirmed today that Free/Basic users who want to get access to E2EE will need to participate in a one-time verification process — in which it will ask them to provide additional pieces of information, such as verifying a phone number via text message — saying it’s implementing this to try to reduce “mass creation of abusive accounts”.

“We are confident that by implementing risk-based authentication, in combination with our current mix of tools — including our work with human rights and children’s safety organizations and our users’ ability to lock down a meeting, report abuse, and a myriad of other features made available as part of our security icon — we can continue to enhance the safety of our users,” it writes.

Next week’s rollout of a technical preview is phase 1 of a four-stage process to bring E2E encryption to the platform.

This means there are some limitations — including on the features that are available in E2EE Zoom meetings (you won’t have access to join before host, cloud recording, streaming, live transcription, Breakout Rooms, polling, 1:1 private chat, and meeting reactions); and on the clients that can be used to join meetings (for phase 1 all E2EE meeting participants must join from the Zoom desktop client, mobile app, or Zoom Rooms). 

The next phase of the E2EE rollout — which will include “better identity management and E2EE SSO integration”, per Zoom’s blog — is “tentatively” slated for 2021.

From next week, customers wanting to check out the technical preview must enable E2EE meetings at the account level and opt-in to E2EE on a per-meeting basis.

All meeting participants must have the E2EE setting enabled in order to join an E2EE meeting. Hosts can enable E2EE at the account, group, and user level, and the setting can be locked at the account or group level, Zoom notes in an FAQ.

The AES 256-bit GCM encryption being used is the same as Zoom currently uses, but here it is combined with public-key cryptography — which means the keys are generated locally by the meeting host and then distributed to participants, rather than Zoom’s cloud performing the key-generation role.

“Zoom’s servers become oblivious relays and never see the encryption keys required to decrypt the meeting contents,” it explains of the E2EE implementation.
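The flow Zoom describes (a meeting key generated on the host's machine and wrapped with each participant's public key, so the relay never sees usable key material) can be sketched roughly as follows. This is a simplified illustration rather than Zoom's actual protocol; the RSA-OAEP key wrapping is a stand-in for whatever exchange Zoom really performs, and the snippet requires the third-party `cryptography` package.

```python
# Toy illustration of E2EE key distribution (NOT Zoom's real protocol):
# the host generates the AES-256-GCM meeting key locally, wraps it with
# each participant's public key, and the relay only ever sees ciphertext.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Each participant holds a key pair; the host only ever sees public keys.
participant_priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
participant_pub = participant_priv.public_key()

# Host side: generate the 256-bit meeting key locally, then wrap it
# for each participant using their public key.
meeting_key = AESGCM.generate_key(bit_length=256)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped = participant_pub.encrypt(meeting_key, oaep)

# Participant side: unwrap the key, then decrypt media frames forwarded
# by the (oblivious) relay server.
key = participant_priv.decrypt(wrapped, oaep)
nonce = os.urandom(12)
frame = AESGCM(meeting_key).encrypt(nonce, b"media frame", None)
assert AESGCM(key).decrypt(nonce, frame, None) == b"media frame"
```

The relay in this picture forwards `wrapped`, `nonce` and `frame` but never holds `meeting_key`, which is the property Zoom is claiming for its servers.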

If you’re wondering how you can be sure you’ve joined an E2EE Zoom meeting, look for a dark padlock displayed atop the green shield icon in the upper left corner of the meeting screen. (Zoom’s standard GCM encryption shows a checkmark there.)

Meeting participants will also see the meeting leader’s security code — which they can use to verify the connection is secure. “The host can read this code out loud, and all participants can check that their clients display the same code,” Zoom notes.
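A common way to implement such a code is to hash the shared key material down to a few groups of digits that are easy to read aloud; each client computes it independently, and the humans compare. The derivation below is purely illustrative (Zoom's actual scheme isn't described in the post):

```python
# Illustrative security-code derivation: map key material to short
# digit groups so participants can compare them out loud.
import hashlib

def security_code(key_material: bytes, groups: int = 5) -> str:
    """Derive groups of 4 digits from a SHA-256 hash of the key material."""
    digest = hashlib.sha256(key_material).hexdigest()
    digits = str(int(digest, 16))[: groups * 4]
    return "-".join(digits[i:i + 4] for i in range(0, len(digits), 4))

host_view = security_code(b"example-meeting-key")
participant_view = security_code(b"example-meeting-key")
assert host_view == participant_view            # same key -> same code
assert host_view != security_code(b"tampered")  # swapped key -> mismatch
```

If an attacker substituted keys in transit, the two sides would derive different codes, which is exactly what the read-aloud check is designed to catch.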

Atlassian Smarts adds machine learning layer across the company’s platform of services

Atlassian has long offered collaboration tools often favored by developers and IT, with such stalwarts as Jira for help desk tickets, Confluence to organize your work and Bitbucket to organize your development deliverables. What it lacked was a machine learning layer across the platform to help users work smarter within and across the applications in the Atlassian family.

That changed today, when Atlassian announced it has been building that machine learning layer, called Atlassian Smarts, and is releasing several tools that take advantage of it. It’s worth noting that unlike Salesforce, which calls its intelligence layer Einstein, or Adobe, which calls its version Sensei, Atlassian chose to forgo the cutesy marketing terms and just let the technology stand on its own.

Shihab Hamid, the founder of the Smarts and Machine Learning Team at Atlassian, who has been with the company for 14 years, says the team avoided a marketing name by design. “I think one of the things that we’re trying to focus on is actually the user experience and so rather than packaging or branding the technology, we’re really about optimizing teamwork,” Hamid told TechCrunch.

Hamid says that the goal of the machine learning layer is to remove the complexity involved with organizing people and information across the platform.

“Simple tasks like finding the right person or the right document becomes a challenge, or at least they slow down productivity and take time away from the creative high-value work that everyone wants to be doing, and teamwork itself is super messy and collaboration is complicated. These are human challenges that don’t really have one right solution,” he said.

He says that Atlassian has decided to solve these problems using machine learning with the goal of speeding up repetitive, time-intensive tasks. Much like Adobe or Salesforce, Atlassian has built this underlying layer of machine smarts, for lack of a better term, that can be distributed across their platform to deliver this kind of machine learning-based functionality wherever it makes sense for the particular product or service.

“We’ve invested in building this functionality directly into the Atlassian platform to bring together IT and development teams to unify work, so the Atlassian flagship products like JIRA and Confluence sit on top of this common platform and benefit from that common functionality across products. And so the idea is if we can build that common predictive capability at the platform layer we can actually proliferate smarts and benefit from the data that we gather across our products,” Hamid said.

The first pieces announced today fit into this vision. For starters, Atlassian is offering a smart search tool that helps users find content across Atlassian tools faster by understanding who you are and how you work. “So by knowing where users work and what they work on, we’re able to proactively provide access to the right documents and accelerate work,” he said.

The second piece is more about collaboration and building teams with the best personnel for a given task. A new tool called predictive user mentions helps Jira and Confluence users find the right people for the job.

“What we’ve done with the Atlassian platform is actually baked in that intelligence, because we know what you work on and who you collaborate with, so we can predict who should be involved and brought into the conversation,” Hamid explained.

Finally, the company announced a tool specifically for Jira users that bundles together similar help requests, which should lead to faster resolution than handling them manually one at a time.

“We’re soon launching a feature in JIRA Service Desk that allows users to cluster similar tickets together, and operate on them to accelerate IT workflows, and this is done in the background using ML techniques to calculate the similarity of tickets, based on the summary and description, and so on.”
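The underlying idea — grouping tickets whose summaries look alike — can be illustrated with a toy bag-of-words cosine similarity. Jira Service Desk's actual ML techniques are not public, so treat this purely as a sketch of the concept:

```python
# Toy ticket clustering: greedily group tickets whose summaries have a
# high bag-of-words cosine similarity (a stand-in for the real ML).
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def cluster(tickets, threshold=0.5):
    """Assign each ticket to the first existing cluster it resembles."""
    vecs = [Counter(t.lower().split()) for t in tickets]
    clusters = []  # each cluster is a list of ticket indices
    for i, v in enumerate(vecs):
        for c in clusters:
            if cosine(v, vecs[c[0]]) >= threshold:
                c.append(i)
                break
        else:
            clusters.append([i])
    return clusters

tickets = [
    "cannot login to vpn from laptop",
    "vpn login fails on laptop",
    "printer out of toner on floor 3",
]
print(cluster(tickets))  # the two VPN tickets land in one cluster
```

An agent could then act on a whole cluster at once — which is the workflow acceleration the feature is after.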

All of this was made possible by the company’s previous shift from mostly on-premises to the cloud and the flexibility that gave them to build new tooling that crosses the entire platform.

Today’s announcements are just the start of what Atlassian hopes will be a slew of new machine learning-fueled features being added to the platform in the coming months and years.

Twilio’s $3.2B Segment acquisition is about helping developers build data-fueled apps

The pandemic has forced businesses to change the way they interact with customers. Whether it’s how they deliver goods and services, or how they communicate, there is one common denominator, and that’s that everything is being forced to be digitally driven much faster.

To some extent, that’s what drove Twilio to acquire Segment for $3.2 billion today. (We wrote about the deal over the weekend. Forbes broke the story last Friday night.) When you get down to it, the two companies fit together well, and expand the platform by giving Twilio customers access to valuable customer data. Chee Chew, Twilio’s chief product officer, says while it may feel like the company is pivoting in the direction of customer experience, they don’t necessarily see it that way.

“A lot of people have thought about us as a communications company, but we think of ourselves as a customer engagement company. We really think about how we help businesses communicate more effectively with their customers,” Chew told TechCrunch.

Laurie McCabe, co-founder and partner at SMB Group, sees the move related to the pandemic and the need companies have to serve customers in a more fully digital way. “More customers are realizing that delivering a great customer experience is key to survive through the pandemic, and thriving as the economy recovers — and are willing to spend to do this even in uncertain times,” McCabe said.

Certainly Chew recognized that Segment gives them something they were lacking by providing developers with direct access to customer data, and that could lead to some interesting applications.

“The data capabilities that Segment has are providing a full view of the customer. It really layers across everything we do. I think of it as a horizontal add across the channels and extending beyond. So I think it really helps us advance in a different sort of way […] towards getting the holistic view of the customer and enabling our customers to build intelligence services on top,” he said.

Brent Leary, founder and principal analyst at CRM Essentials, sees Segment helping to provide a powerful data-fueled developer experience. “This move allows Twilio to impact the data-insight-interaction-experience transformation process by removing friction from developers using their platform,” Leary explained. In other words, it gives developers that ability that Chew alluded to, to use data to build more varied applications using Twilio APIs.

Paul Greenberg, author of CRM at the Speed of Light, and founder and principal analyst at 56 Group, agrees, saying, “Segment gives Twilio the ability to use customer data in what is already a powerful unified communications platform and hub. And since it is, in effect, APIs for both, the flexibility [for developers] is enormous,” he said.

That may be so, but Holger Mueller, an analyst at Constellation Research, says the company has to be seeing that the pure communication parts of the platform like SMS are becoming increasingly commoditized, and this deal, along with the SendGrid acquisition in 2018, gives Twilio a place to expand its platform into a much more lucrative data space.

“Twilio needs more growth path and it looks like its strategy is moving up the stack, at least with the acquisition of Segment. Data movement and data residence compliance is a huge headache for enterprises when they build their next generation applications,” Mueller said.

As Chew said, early on the problems were related to building SMS messages into applications and that was the problem that Twilio was trying to solve because that’s what developers needed at the time, but as it moves forward, it wants to provide a more unified customer communications experience, and Segment should help advance that capability in a big way for them.

Twilio is buying customer data startup Segment for between $3B and $4B

Sources have told TechCrunch that Twilio intends to acquire customer data startup Segment for between $3 and $4 billion. Forbes broke the story on Friday night, reporting a price tag of $3.2 billion.

We have heard from a couple of industry sources that the deal is in the works and could be announced as early as Monday.

Twilio and Segment are both API companies. That means they create an easy way for developers to tap into a specific type of functionality without writing a lot of code. As I wrote in a 2017 article on Segment, it provides a set of APIs to pull together customer data from a variety of sources:

Segment has made a name for itself by providing a set of APIs that enable it to gather data about a customer from a variety of sources like your CRM tool, customer service application and website and pull that all together into a single view of the customer, something that is the goal of every company in the customer information business.
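Conceptually, that "single view of the customer" is a merge of records keyed on a shared customer ID across sources. A toy sketch of the idea, with all field names invented for illustration:

```python
# Toy "single customer view": merge records from several sources
# (CRM, support desk, website analytics) keyed by customer_id.
def unify(*sources):
    view = {}
    for source in sources:
        for record in source:
            view.setdefault(record["customer_id"], {}).update(
                {k: v for k, v in record.items() if k != "customer_id"}
            )
    return view

crm = [{"customer_id": "c1", "name": "Ada", "plan": "pro"}]
support = [{"customer_id": "c1", "open_tickets": 2}]
web = [{"customer_id": "c1", "last_visit": "2020-10-09"}]
print(unify(crm, support, web))
```

The hard parts Segment actually solves — identity resolution, schema normalization, consent — are far messier than a dictionary merge, but the end shape is the same: one record per customer, assembled from many systems.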

While Twilio’s main focus since it launched in 2008 has been on making it easy to embed communications functionality into any app, it signaled a switch in direction when it released the Flex customer service API in March 2018. Later that same year, it bought SendGrid, an email marketing API company, for $2 billion.

Twilio’s market cap as of Friday was an impressive $45 billion. You could see how it can afford to flex its financial muscles to combine Twilio’s core API mission, especially Flex, with the ability to pull customer data with Segment and create customized email or ads with SendGrid.

This could enable Twilio to expand beyond pure core communications capabilities, at a combined cost of around $5 billion for the two companies. That looks like a good deal for what could turn out to be a substantial business, as more and more companies look for ways to understand and communicate with their customers in more relevant ways across multiple channels.

As Semil Shah from early stage VC firm Haystack wrote in the company blog yesterday, Segment saw a different way to gather customer data, and Twilio was wise to swoop in and buy it.

Segment’s belief was that a traditional CRM wasn’t robust enough for the enterprise to properly manage its pipe. Segment entered to provide customer data infrastructure to offer a more unified experience. Now under the Twilio umbrella, Segment can continue to build key integrations (like they have for Twilio data), which is being used globally inside Fortune 500 companies already.

Segment was founded in 2011 and raised over $283 million, according to Crunchbase data. Its most recent raise was $175 million in April on a $1.5 billion valuation.

Twilio stock closed at $306.24 per share on Friday, up 2.39%.

Segment declined to comment on this story. We also sent a request for comment to Twilio, but hadn’t heard back by the time we published.  If that changes, we will update the story.

How Roblox completely transformed its tech stack

Picture yourself in the role of CIO at Roblox in 2017.

At that point, the gaming platform and publishing system that launched in 2005 was growing fast, but its underlying technology was aging, consisting of a single data center in Chicago and a bunch of third-party partners, including AWS, all running bare metal (nonvirtualized) servers. At a time when users have precious little patience for outages, your uptime was just two nines, or 99% (five nines, or 99.999%, is considered optimal).

Unbelievably, Roblox was popular in spite of this, but the company’s leadership knew it couldn’t continue with performance like that, especially as it was rapidly gaining in popularity. The company needed to call in the technology cavalry, which is essentially what it did when it hired Dan Williams in 2017.

Williams has a history of solving these kinds of intractable infrastructure issues, with a background that includes a gig at Facebook between 2007 and 2011, where he worked on the technology to help the young social network scale to millions of users. Later, he worked at Dropbox, where he helped build a new internal network, leading the company’s move away from AWS, a major undertaking involving moving more than 500 petabytes of data.

When Roblox approached him in mid-2017, he jumped at the chance to take on another major infrastructure challenge. While they are still in the midst of the transition to a new modern tech stack today, we sat down with Williams to learn how he put the company on the road to a cloud-native, microservices-focused system with its own network of worldwide edge data centers.

Scoping the problem

Headroom, which uses AI to supercharge videoconferencing, raises $5M

Videoconferencing has become a cornerstone of how many of us work these days — so much so that one leading service, Zoom, has graduated into verb status because of how much it’s getting used.

But does that mean videoconferencing works as well as it should? Today, a new startup called Headroom is coming out of stealth, tapping into a battery of AI tools — computer vision, natural language processing and more — on the belief that the answer to that question is a clear — no bad WiFi interruption here — “no.”

Headroom not only hosts videoconferences, but then provides transcripts, summaries with highlights, gesture recognition, optimized video quality, and more, and today it’s announcing that it has raised a seed round of $5 million as it gears up to launch its freemium service into the world.

You can sign up to the waitlist to pilot it, and get other updates here.

The funding is coming from Anna Patterson of Gradient Ventures (Google’s AI venture fund); Evan Nisselson of LDV Capital (a specialist VC backing companies building visual technologies); Yahoo founder Jerry Yang, now of AME Cloud Ventures; Ash Patel of Morado Ventures; Anthony Goldbloom, the co-founder and CEO of Kaggle.com; and Serge Belongie, Cornell Tech associate dean and Professor of Computer Vision and Machine Learning.

It’s an interesting group of backers, but that might be because the founders themselves have a pretty illustrious background with years of experience using some of the most cutting-edge visual technologies to build other consumer and enterprise services.

Julian Green — a British transplant — was most recently at Google, where he ran the company’s computer vision products, including the Cloud Vision API that was launched under his watch. He came to Google by way of its acquisition of his previous startup Jetpac, which used deep learning and other AI tools to analyze photos to make travel recommendations. In a previous life, he was one of the co-founders of Houzz, another kind of platform that hinges on visual interactivity.

Russian-born Andrew Rabinovich, meanwhile, spent the last five years at Magic Leap, where he was the head of AI, and before that, the director of deep learning and the head of engineering. Before that, he too was at Google, as a software engineer specializing in computer vision and machine learning.

You might think that leaving their jobs to build an improved videoconferencing service was an opportunistic move, given the huge surge of use that the medium has had this year. Green, however, tells me that they came up with the idea and started building it at the end of 2019, when the term “Covid-19” didn’t even exist.

“But it certainly has made this a more interesting area,” he quipped, adding that it did make raising money significantly easier, too. (The round closed in July, he said.)

Magic Leap had long been in limbo — AR and VR have proven incredibly tough to build businesses around, especially in the short to medium term, even for a startup with hundreds of millions of dollars in VC backing — and could probably have used some more interesting ideas to pivot to. And Google is Google, with everything in tech having an endpoint in Mountain View. So it’s also curious that the pair decided to strike out on their own to build Headroom rather than pitch the tech at their respective previous employers.

Green said the reasons were two-fold. The first has to do with the efficiency of building something when you are small. “I enjoy moving at startup speed,” he said.

And the second has to do with the challenges of building things on legacy platforms versus fresh, from the ground up.

“Google can do anything it wants,” he replied when I asked why he didn’t think of bringing these ideas to the team working on Meet (or Hangouts if you’re a non-business user). “But to run real-time AI on video conferencing, you need to build for that from the start. We started with that assumption,” he said.

The first iteration of the product will include features that will automatically take transcripts of the whole conversation, with the ability to use the video replay to edit the transcript if something has gone awry; offer a summary of the key points that are made during the call; and identify gestures to help shift the conversation.

And Green tells me that they are already working on features to be added in future iterations. When the videoconference uses supplementary presentation materials, those can also be processed by the engine for highlights and transcription.

And another feature will optimize the pixels that you see for much better video quality, which should come in especially handy when you or the person/people you are talking to are on poor connections.

“You can understand where and what the pixels are in a video conference and send the right ones,” he explained. “Most of what you see of me and my background is not changing, so those don’t need to be sent all the time.”
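The intuition Green describes — transmitting only the regions of the frame that changed — can be shown with a toy block-diff. Real codecs and Headroom's ML-driven approach are of course far more sophisticated:

```python
# Toy frame-diff: only the blocks that changed since the previous frame
# need to be sent over the wire; a static background costs nothing.
def changed_blocks(prev, curr, block=2):
    """Return {(row, col): block_pixels} for blocks that differ."""
    updates = {}
    for r in range(0, len(curr), block):
        for c in range(0, len(curr[0]), block):
            blk = [row[c:c + block] for row in curr[r:r + block]]
            old = [row[c:c + block] for row in prev[r:r + block]]
            if blk != old:
                updates[(r, c)] = blk
    return updates

prev = [[0] * 4 for _ in range(4)]
curr = [row[:] for row in prev]
curr[0][1] = 255  # a single pixel changes (say, the speaker moved)
print(changed_blocks(prev, curr))  # only the top-left block is sent
```

In this 4x4 example, one block out of four goes over the wire; on a mostly static webcam feed the savings are what make poor connections usable.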

All of this taps into some of the more interesting aspects of sophisticated computer vision and natural language algorithms. Creating a summary, for example, relies on technology that is able to suss out not just what you are saying, but what are the most important parts of what you or someone else is saying.
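A classic pre-neural baseline for that "most important parts" problem scores each sentence by the frequency of its words across the transcript and keeps the top scorers. Modern systems use far more sophisticated NLP, so this is only a sketch of the idea:

```python
# Toy extractive summarization: rank sentences by the average corpus
# frequency of their words and keep the highest-scoring ones.
from collections import Counter

def extractive_summary(sentences, keep=1):
    words = Counter(w for s in sentences for w in s.lower().split())
    def score(s):
        toks = s.lower().split()
        return sum(words[w] for w in toks) / len(toks)
    return sorted(sentences, key=score, reverse=True)[:keep]

notes = [
    "The launch date moves to March.",
    "Lunch options were discussed briefly.",
    "Everyone agreed the launch date moving to March is the key decision.",
]
print(extractive_summary(notes))
```

Sentences that share vocabulary with the rest of the conversation score highest, which is a crude but serviceable proxy for importance.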

And if you’ve ever been on a videocall and found it hard to make it clear you’ve wanted to say something, without straight-out interrupting the speaker, you’ll understand why gestures might be very useful.

But they can also come in handy if a speaker wants to know if he or she is losing the attention of the audience: the same tech that Headroom is using to detect gestures for people keen to speak up can also be used to detect when they are getting bored or annoyed and pass that information on to the person doing the talking.

“It’s about helping with EQ,” he said, with what I’m sure was a little bit of his tongue in his cheek, but then again we were on a Google Meet, and I may have misread that.

And that brings us to why Headroom is tapping into an interesting opportunity. At their best, when they work, tools like these not only supercharge videoconferences, but they have the potential to solve some of the problems you may have come up against in face-to-face meetings, too. Building software that actually might be better than the “real thing” is one way of making sure that it can have staying power beyond the demands of our current circumstances (which hopefully won’t be permanent circumstances).

As IBM spins out legacy infrastructure management biz, CEO goes all in on the cloud

When IBM announced this morning that it was spinning out its legacy infrastructure services business, it was a clear signal that new CEO Arvind Krishna, who took the reins in April, was ready to fully commit his company to the cloud.

The move was a continuation of the strategy the company began to put in place when it bought Red Hat in 2018 for the princely sum of $34 billion. That purchase signaled a shift to a hybrid-cloud vision, where some of your infrastructure lives on-premises and some in the cloud — with Red Hat helping to manage it all.

Even as IBM moved deeper into the hybrid cloud strategy, Krishna saw the financial results like everyone else and recognized the need to focus more keenly on that approach. In its most recent earnings report, overall IBM revenue was $18.1 billion, down 5.4% compared to the year-ago period. But if you broke out just IBM’s cloud and Red Hat revenue, you saw some more promising results: cloud revenue was up 30% to $6.3 billion, while Red Hat-derived revenue was up 17%.

Even more, cloud revenue for the trailing 12 months was $23.5 billion, up 20%.

You don’t need to be a financial genius to see where the company is headed. Krishna clearly saw that it was time to start moving on from the legacy side of IBM’s business, even if there would be some short-term pain involved in doing so. So the executive put his resources into (as they say) where the puck is going. Today’s news is a continuation of that effort.

The managed infrastructure services segment of IBM is a substantial business in its own right, but Krishna was promoted to CEO to clean house, taking over from Ginni Rometty to make hard decisions like this.

While its cloud business is growing, Synergy Research data has IBM’s public cloud market share mired in single digits, at perhaps 4% or 5%. In fact, Alibaba has passed it in market share, though both are small compared to the market leaders Amazon, Microsoft and Google.

Like Oracle, another legacy company trying to shift more to the cloud infrastructure business, IBM has a ways to go in its cloud evolution.

As with Oracle, IBM has been chasing the market leaders — Google at 9%, Microsoft 18% and AWS with 33% share of public cloud revenue (according to Synergy) — for years now without much change in its market share. What’s more, IBM competes directly with Microsoft and Google, which are also going after that hybrid cloud business with more success.

While IBM’s cloud revenue is growing, its market share needle is stuck and Krishna understands the need to focus. So, rather than continue to pour resources into the legacy side of IBM’s business, he has decided to spin out that part of the company, allowing more attention for the favored child, the hybrid cloud business.

It’s a sound strategy on paper, but it remains to be seen if it will have a material impact on IBM’s growth profile in the long run. He is betting that it will, but then what choice does he have?

Grid AI raises $18.6M Series A to help AI researchers and engineers bring their models to production

Grid AI, a startup founded by William Falcon, the inventor of the popular open-source PyTorch Lightning project, that aims to help machine learning engineers work more efficiently, today announced that it has raised an $18.6 million Series A funding round, which closed earlier this summer. The round was led by Index Ventures, with participation from Bain Capital Ventures and firstminute.

Falcon co-founded the company with Luis Capelo, who was previously the head of machine learning at Glossier. Unsurprisingly, the idea here is to take PyTorch Lightning, which launched about a year ago, and turn that into the core of Grid’s service. The main idea behind Lightning is to decouple the data science from the engineering.

The team argues that a few years ago, when data scientists tried to get started with deep learning, they didn’t always have the right expertise and it was hard for them to get everything right.

“Now the industry has an unhealthy aversion to deep learning because of this,” Falcon noted. “Lightning and Grid embed all those tricks into the workflow so you no longer need to be a PhD in AI nor [have] the resources of the major AI companies to get these things to work. This makes the opportunity cost of putting a simple model against a sophisticated neural network a few hours’ worth of effort instead of the months it used to take. When you use Lightning and Grid it’s hard to make mistakes. It’s like if you take a bad photo with your phone but we are the phone and make that photo look super professional AND teach you how to get there on your own.”

As Falcon noted, Grid is meant to help data scientists and other ML professionals “scale to match the workloads required for enterprise use cases.” Lightning itself can get them partially there, but Grid is meant to provide all of the services its users need to scale up their models to solve real-world problems.

What exactly that looks like isn’t quite clear yet, though. “Imagine you can find any GitHub repository out there. You get a local copy on your laptop and without making any code changes you spin up 400 GPUs on AWS — all from your laptop using either a web app or command-line-interface. That’s the Lightning “magic” applied to training and building models at scale,” Falcon said. “It is what we are already known for and has proven to be such a successful paradigm shift that all the other frameworks like Keras or TensorFlow, and companies have taken notice and have started to modify what they do to try to match what we do.”

The service is now in private beta.

With this new funding, Grid, which currently has 25 employees, plans to expand its team and strengthen its corporate offering via both Grid AI and through the open-source project. Falcon tells me that he aims to build a diverse team, not least because he himself is an immigrant, born in Venezuela, and a U.S. military veteran.

“I have first-hand knowledge of the extent that unethical AI can have,” he said. “As a result, we have approached hiring our current 25 employees across many backgrounds and experiences. We might be the first AI company that is not all the same Silicon Valley prototype tech-bro.”

“Lightning’s open-source traction piqued my interest when I first learned about it a year ago,” Index Ventures’ Sarah Cannon told me. “So intrigued in fact I remember rushing into a closet in Helsinki while at a conference to have the privacy needed to hear exactly what Will and Luis had built. I promptly called my colleague Bryan Offutt who met Will and Luis in SF and was impressed by the ‘elegance’ of their code. We swiftly decided to participate in their seed round, days later. We feel very privileged to be part of Grid’s journey. After investing in seed, we spent a significant amount of time with the team, and the more time we spent with them the more conviction we developed. Less than a year later and pre-launch, we knew we wanted to lead their Series A.”

YC grad DigitalBrain snags $3.4M seed to streamline customer service tasks

Most startup founders have a tough road to their first round of funding, but the founders of DigitalBrain had it a bit tougher than most. The two young founders survived by entering and winning hackathons to pay their rent and put food on the table. One of the ideas they came up with at those hackathons was DigitalBrain, a layer that sits on top of customer service software like Zendesk to streamline tasks and ease the job of customer service agents.

They ended up in Y Combinator in the Summer 2020 class, and today the company announced a $3.4 million seed investment. This total includes $3 million raised this round, which closed in August, and previously unannounced investments of $250,000 in March from Unshackled Ventures and $150,000 from Y Combinator in May.

The round was led by Moxxie Ventures with help from Caffeinated Capital, Unshackled Ventures, Shrug Capital, Weekend Fund, Underscore VC and Scribble Ventures along with a slew of individual investors.

Company co-founder Kesava Kirupa Dinakaran says that after he and his partner Dmitry Dolgopolov met at a hackathon in May 2019, they moved into a community house in San Francisco full of startup founders. They kept hearing from their housemates about the issues their companies faced with customer service as they began scaling. Like any good entrepreneur, they decided to build something to solve that problem.

“DigitalBrain is an external layer that sits on top of existing help desk software to actually help the support agents get through their tickets twice as fast, and we’re doing that by automating a lot of internal workflows, and giving them all the context and information they need to respond to each ticket making the experience of responding to these tickets significantly faster,” Dinakaran told TechCrunch.

What this means in practice is that customer service reps work in DigitalBrain to process their tickets, and as they come upon a problem such as canceling an order or reporting a bug, instead of traversing several systems to fix it, they choose the appropriate action in DigitalBrain, enter the required information, and the problem is resolved for them automatically. In the case of a bug, it would file a Jira ticket with engineering. In the case of canceling an order, it would take all of the actions and update all of the records required by this request.
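Conceptually, the workflow Dinakaran describes resembles a thin dispatch layer that maps an agent's chosen action to the right backend integration. The sketch below is purely illustrative; the action names and `execute` function are assumptions for the example, not DigitalBrain's actual code:

```python
# Hypothetical sketch of an action-dispatch layer like the one described:
# each ticket action maps to a handler that fans out to the right backend,
# so the agent never has to leave a single interface.

def file_bug(ticket):
    # A real integration would call the Jira REST API here.
    return f"Jira issue created for ticket {ticket['id']}"

def cancel_order(ticket):
    # A real integration would update the order system and the help-desk record.
    return f"Order {ticket['order_id']} canceled"

# Registry of supported actions (illustrative names, not DigitalBrain's).
ACTIONS = {
    "report_bug": file_bug,
    "cancel_order": cancel_order,
}

def execute(action, ticket):
    """Run the agent's chosen action against the ticket's data."""
    handler = ACTIONS.get(action)
    if handler is None:
        raise ValueError(f"Unknown action: {action}")
    return handler(ticket)
```

The point of such a design is that adding a new automated workflow is just registering another handler, which is one plausible way a product like this could let agents resolve tickets "twice as fast."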

As Dinakaran points out, they aren’t typical Silicon Valley startup founders. They are 20-year-old immigrants from India and Russia, respectively, who came to the U.S. with coding skills and a dream of building a company. “We are both outsiders to Silicon Valley. We didn’t go to college. We don’t come from families of means. We wanted to come here and build our initial network from the ground up,” he said.

Eventually they met some folks through their housemates, who suggested that they apply to Y Combinator. “As we started to meet people that we met through our community house here, some of them were YC founders and they kept saying I think you guys will love the YC community, not just in terms of your ethos, but also just purely from a perspective of meeting new people and where you are,” he said.

He said that while he and his co-founder have trouble wrapping their arms around a number like the amount they have in the bank now, considering it wasn’t that long ago that they were struggling to meet expenses every month, they recognize this money buys them an opportunity to start building a more substantial company.

“What we’re trying to do is really accelerate the development and building of what we’re doing. And we think if we push the gas pedal with the resources we’ve gotten, we’ll be able to accelerate bringing on the next couple of customers, and start onboarding some of the larger companies we’re interested in,” he said.

Daily Crunch: G Suite becomes Google Workspace

Google rebrands G Suite, Apple announces its next event date and John McAfee is arrested. This is your Daily Crunch for October 6, 2020.

The big story: G Suite becomes Google Workspace

To a large extent, Google Workspace is just a rebranding of G Suite, complete with a new set of (less distinctive) logos for Gmail, Calendar, Drive, Docs and Meet. But the company is also launching a number of new features.

For one thing, Google is (as previously announced) integrating Meet, Chat and Rooms across applications, with Gmail as the service where they really come together. Other features coming soon are the ability to collaborate on documents in Chats and a “smart chip” with contact details and suggested actions that appears when you @mention someone in a document.

Pricing remains largely the same, although there’s now an $18 per user per month Business Plus plan with additional security features and compliance tools.

The tech giants

Apple will announce the next iPhone on October 13 — Apple just sent out invites for its upcoming hardware event, all but confirming the arrival of the next iPhone.

Facebook’s Portal adds support for Netflix, Zoom and other features — The company will also introduce easier ways to launch Netflix and other video streaming apps via one-touch buttons on its new remote.

Instagram’s 10th birthday release introduces a Stories Map, custom icons and more — There’s even a selection of custom app icons for those who have recently been inspired to redesign their home screen.

Startups, funding and venture capital

SpaceX awarded contract to help develop US missile-tracking satellite network — The contract covers creation and delivery of “space vehicles” (actual satellites) that will form a constellation offering global coverage for advanced missile warning and tracking.

Salesforce Ventures launches $100M Impact Fund to invest in cloud startups with social mission — Focus areas include education and reskilling, climate action, diversity, equity and inclusion, as well as providing tech for nonprofits and foundations.

Ÿnsect, the makers of the world’s most expensive bug farm, raises another $224 million — The team hopes to provide insect protein for things like fish food and fertilizer.

Advice and analysis from Extra Crunch

Inside Root’s IPO filing — As insurtech booms, Root looks to take advantage of a warm market and enthusiastic investors.

To fill funding gaps, VCs boost efforts to find India’s standout early-stage startups — Blume Ventures’ Karthik Reddy says, “There’s an artificial skew toward unicorns.”

A quick peek into Opendoor’s financial results — Opendoor’s 2020 results are not stellar.

(Reminder: Extra Crunch is our subscription membership program, which aims to democratize information about startups. You can sign up here.)

Everything else

John McAfee arrested after DOJ indicts crypto millionaire for tax evasion — The cybersecurity entrepreneur and crypto personality’s wild ride could be coming to an end after he was arrested in Spain and now faces extradition to the U.S.

Trump is already breaking platform rules again with false claim that COVID-19 is ‘far less lethal’ than the flu — Facebook took down Trump’s post, while Twitter hid it behind a warning.

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 3pm Pacific, you can subscribe here.