Google’s FeedBurner moves to a new infrastructure but loses its email subscription service

Google today announced that it is moving FeedBurner to a new infrastructure but also deprecating its email subscription service.

If you’re an internet user of a certain age, chances are you used Google’s FeedBurner to manage the RSS feeds of your personal blogs and early podcasts at some point. During the Web 2.0 era, it was the de facto standard for feed management and analytics, after all. Founded in 2004, with Dick Costolo as one of its co-founders (before he became Twitter’s CEO in 2010), it was acquired by Google in 2007.

Ever since, FeedBurner has lingered in an odd kind of limbo. While Google had no qualms about shutting down popular services like Google Reader in favor of ill-fated social experiments like Google+, FeedBurner just kept burning feeds day in and day out, even as Google slowly deprecated parts of the service, most notably its advertising integrations.

I don’t know that anybody spent much time thinking about the service, and RSS has slowly (and sadly) faded into obscurity, but FeedBurner was probably easy enough to maintain that Google kept it going. And despite everything, shutting it down would likely break enough publisher tools to create quite an uproar. The TechCrunch RSS feed, to which you are surely subscribed in your desktop RSS reader, is http://feeds.feedburner.com/TechCrunch/, after all.

So here we are, 14 years later, and Google today announced that it is “making several upcoming changes to support the product’s next chapter.” It’s moving the service to a new, more stable infrastructure.

But in July, it is also shutting down some non-core features that don’t directly involve feed management, most importantly the FeedBurner email subscription service that allowed you to get email alerts when a feed updated. Feed owners will be able to download their email subscriber lists (and will still be able to do so after July, too). With that, Blogger’s FollowByEmail widget will also be deprecated (and hey, did you start this day thinking you’d read about FeedBurner AND Blogger on TechCrunch without having to travel back to 2007?).
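For publishers who want to keep some version of those alerts going, the underlying mechanics are simple to replicate: poll the feed and watch for entry IDs you haven’t seen before. Here is a minimal sketch in Python using the third-party feedparser package (an assumption on my part; any RSS library will do), with the actual email-sending left out:

```python
# Poll an RSS feed and surface entries we haven't seen before: roughly what
# FeedBurner's email service did on your behalf. Requires: pip install feedparser
import feedparser

FEED_URL = "http://feeds.feedburner.com/TechCrunch/"
seen_ids = set()

def new_entries(url):
    feed = feedparser.parse(url)
    fresh = [e for e in feed.entries if e.get("id") not in seen_ids]
    seen_ids.update(e.get("id") for e in fresh)
    return fresh  # hand these off to your own email-sending code

# Run this on a schedule (cron, for example) instead of just once.
for entry in new_entries(FEED_URL):
    print(entry.title, entry.link)
```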

Google stresses that other core FeedBurner features will remain in place, but given the popularity of email newsletters, that’s a bit of an odd move.

JXL turns Jira into spreadsheets

Atlassian’s Jira is an extremely powerful issue tracking and project management tool, but it’s not the world’s most intuitive piece of software either. Spreadsheets, on the other hand, are pretty much the de facto standard for managing virtually anything in a business. It’s maybe no surprise then that there are already a couple of tools on the market that bring a spreadsheet-like view of your projects to Jira or connect it to services like Google Sheets.

The latest entrant in this field is JXL Spreadsheets for Jira (and specifically Jira Cloud), which was founded by two ex-Atlassian employees, Daniel Franz and Hannes Obweger. And in what has become a bit of a trend, Atlassian Ventures invested in JXL earlier this year.

Franz built the Good News news reader before joining Atlassian, while his co-founder previously founded Radiant Minds Software, the maker of Jira Roadmaps (now Portfolio for Jira), which was acquired by Atlassian.

Image Credits: JXL

“Jira is so successful because it is awesome,” Franz told me. “It is so versatile. It’s highly customizable. I’ve seen people in my time who are doing anything and everything with it. Working with customers [at Atlassian] — at some point, you didn’t get surprised anymore, but what the people can do and track with JIRA is amazing. But no one would rock up and say, ‘hey, JIRA is very pleasant and easy to use.'”

As Franz noted, Jira by default takes a very opinionated view of how people should use it. That also means users often end up exporting their issues to create reports and visualizations, for example, but if they make any changes to this data, those changes never flow back into Jira. No matter how you feel about spreadsheets, they work for many people and are highly flexible. Even Atlassian would likely agree: the new Jira Work Management, currently in beta, comes with a spreadsheet-like view, and Trello, too, went this way when it launched a major update earlier this year.

Image Credits: JXL

Over the course of its three-month beta, the JXL team saw how its users ended up building everything from cross-project portfolio management to sprint planning, backlog maintenance, timesheets and inventory management on top of its service. Indeed, Franz tells me that the team already has some large customers, with one of them having a 7,000-seat license.

Pricing for JXL seems quite reasonable, starting at $1/user/month for teams with up to 10 users. Larger teams get increasingly larger discounts, down to $0.45/user/month for licenses with over 5,000 seats. There is also a free trial.

One of the reasons the company can offer this kind of pricing is that it only needs a very simple backend. None of a customer’s data sits on JXL’s servers; instead, the service sits directly on top of Jira’s APIs, which also means that changes are synced back and forth in real time.
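To make that architecture concrete, here is a hedged sketch of the kind of round trip a spreadsheet-style layer can make against the documented Jira Cloud REST API: read a field, then write an edit straight back so Jira remains the single source of truth. The site URL, issue key and credentials are placeholders:

```python
# Read and write a Jira issue via the Jira Cloud REST API (v3).
import requests
from requests.auth import HTTPBasicAuth

SITE = "https://your-domain.atlassian.net"  # placeholder Jira Cloud site
AUTH = HTTPBasicAuth("me@example.com", "MY_API_TOKEN")  # placeholder credentials

# Read: fetch an issue's current summary, as a spreadsheet view would render it.
issue = requests.get(f"{SITE}/rest/api/3/issue/DEMO-1", auth=AUTH).json()
print(issue["fields"]["summary"])

# Write: push the edited value back, like committing a cell edit.
requests.put(
    f"{SITE}/rest/api/3/issue/DEMO-1",
    auth=AUTH,
    json={"fields": {"summary": "Updated from the spreadsheet view"}},
)
```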

JXL is now available in the Atlassian Marketplace and the team is actively hiring as it looks to build out its product (and put its new funding to work).

1Password acquires SecretHub and launches new enterprise secrets management tool

1Password, the password management service that competes with the likes of LastPass and BitWarden, today announced a major push beyond the basics of password management and into the infrastructure secrets management space. To do so, the company has acquired secrets management service SecretHub and is now launching its new 1Password Secrets Automation service.

1Password did not disclose the price of the acquisition. According to Crunchbase, Netherlands-based SecretHub never raised any institutional funding ahead of today’s announcement.

For companies like 1Password, the enterprise space, where businesses have to manage corporate credentials, API tokens, keys and certificates for individual users and increasingly complex infrastructure services, is a natural place to expand into. With the combination of 1Password and its new Secrets Automation service, businesses can use a single tool that covers everything from managing their employees’ passwords to handling infrastructure secrets. 1Password is currently in use by more than 80,000 businesses worldwide, and many of them are surely potential users of its Secrets Automation service, too.

“Companies need to protect their infrastructure secrets as much if not more than their employees’ passwords,” said Jeff Shiner, CEO of 1Password. “With 1Password and Secrets Automation, there is a single source of truth to secure, manage and orchestrate all of your business secrets. We are the first company to bring both human and machine secrets together in a significant and easy-to-use way.”
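As a rough illustration of what that single source of truth looks like from a developer’s perspective, here is a sketch of fetching a secret over a Connect-style REST endpoint of the kind Secrets Automation exposes. The host, vault and item IDs are placeholders, and the exact API surface should be checked against 1Password’s documentation:

```python
# Fetch a secret from a Secrets Automation/Connect-style REST API at runtime,
# instead of hard-coding it into configuration files. Endpoint shape assumed;
# check 1Password's docs for the current API.
import os
import requests

CONNECT_HOST = "http://localhost:8080"  # assumed local Connect-style server
TOKEN = os.environ["OP_CONNECT_TOKEN"]  # access token comes from the environment

def get_item(vault_uuid: str, item_uuid: str) -> dict:
    resp = requests.get(
        f"{CONNECT_HOST}/v1/vaults/{vault_uuid}/items/{item_uuid}",
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

# Pull a database password out of the item's fields (placeholder UUIDs).
item = get_item("VAULT_UUID", "ITEM_UUID")
password = next(f["value"] for f in item["fields"] if f.get("purpose") == "PASSWORD")
```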

In addition to the acquisition and new service, 1Password also today announced a new partnership with GitHub. “We’re partnering with 1Password because their cross-platform solution will make life easier for developers and security teams alike,” said Dana Lawson, VP of partner engineering and development at GitHub, the largest and most advanced development platform in the world. “With the upcoming GitHub and 1Password Secrets Automation integration, teams will be able to fully automate all of their infrastructure secrets, with full peace of mind that they are safe and secure.”

IonQ now supports IBM’s Qiskit quantum development kit

IonQ, the trapped ion quantum computing company that recently went public via a SPAC, today announced that it is integrating its quantum computing platform with the open-source Qiskit software development kit. This means Qiskit users can now bring their programs to IonQ’s platform without any major modifications to their code.

At first glance, that seems relatively unremarkable, but it’s worth noting that Qiskit was founded by IBM Research and is IBM’s default tool for working with its quantum computers. There is a healthy bit of competition between IBM and IonQ (and, to be fair, many others in this space), in part because the two are betting on very different technologies at the core of their platforms. While IonQ is betting on trapped ions, which allow its machines to run at room temperature, IBM’s technique requires its machines to be supercooled.

IonQ has now released a new provider library for Qiskit that is available as part of the Qiskit Partner repository on GitHub and via the Python Package Index.
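In practice, moving an existing Qiskit program over should mostly come down to swapping in an IonQ backend. A minimal sketch, following the provider pattern the qiskit-ionq package documents (check the repository for the current API):

```python
# Run a Bell-state circuit through the IonQ provider for Qiskit.
# Requires: pip install qiskit qiskit-ionq, plus an IonQ API key.
from qiskit import QuantumCircuit
from qiskit_ionq import IonQProvider

provider = IonQProvider("MY_IONQ_API_KEY")  # placeholder API key
backend = provider.get_backend("ionq_simulator")  # or "ionq_qpu" for hardware

qc = QuantumCircuit(2, 2)
qc.h(0)       # put qubit 0 into superposition
qc.cx(0, 1)   # entangle qubits 0 and 1
qc.measure([0, 1], [0, 1])

job = backend.run(qc, shots=1024)
print(job.result().get_counts())  # expect roughly half '00' and half '11'
```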

“IonQ is excited to make our quantum computers and APIs easily accessible to the Qiskit community,” said IonQ CEO & President Peter Chapman. “Open source has already revolutionized traditional software development. With this integration, we’re bringing the world one step closer to the first generation of widely-applicable quantum applications.”

On the one hand, it’s hard not to look at this as IonQ needling IBM a bit, but it’s also an acknowledgment that Qiskit has become something of a standard for developers who want to work with quantum computers. Rivalries aside, we’re still in the early days of quantum computing, and with no clear leader yet, anything that makes these various platforms more interoperable is a win for developers who want to dip their toes into writing for them.

Memic raises $96M for its robot-assisted surgery platform

Memic, a startup developing a robotic-assisted surgical platform that recently received marketing authorization from the U.S. Food and Drug Administration, today announced that it has closed a $96 million Series D funding round. The round was led by Peregrine Ventures and Ceros, with participation from OurCrowd and Accelmed. The company plans to use the new funding to commercialize its platform in the U.S. and expand its marketing and sales efforts outside of the U.S., too.

The company had previously raised a total of $31.8 million, according to Crunchbase, including about $12.5 million raised through equity crowdfunding platform OurCrowd.

Memic team photo

Image Credits: Memic

The Hominis, as the company calls its platform, has been authorized for use in “single site, natural orifice laparoscopic-assisted transvaginal benign surgical procedures including benign hysterectomy.” It’s worth noting that the robot doesn’t perform the surgery without human intervention. Instead, surgeons control the device — and its robotic arms — from a central console. The company notes that the instruments are meant to replicate the motions of the surgeon’s arms. And while it’s currently only authorized for this one specific type of procedure, Memic is looking at a wide range of other procedures where a system like this could be beneficial.

“The Hominis system represents a significant advancement in the growing multi-billion-dollar robotic surgery market. This financing positions us to accelerate our commercialization efforts and bring Hominis to both surgeons and patients in the months ahead,” said Dvir Cohen, co-founder and CEO of Memic.

It’s worth noting that there are a wide range of similar, computer-assisted surgical systems on the market already. Only last month, Asensus Surgical received FDA clearance for its laparoscopic platform to be used in general surgery, for example. Meanwhile, eye surgery robotics startup ForSight recently raised $10 million in seed funding for its platform.

Memic’s Hominis is the first robotic device approved for benign transvaginal procedures, though, and the company and its investors are surely betting on this being a first stepping stone to additional use cases over time.

“Given the broad potential of Hominis combined with a strong management team, we are proud to support Memic and execution of its bold vision,” said Eyal Lifschitz, managing general partner of Peregrine Ventures.

Esri brings its flagship ArcGIS platform to Kubernetes

Esri, the geographic information system (GIS), mapping and spatial analytics company, is hosting its (virtual) developer summit today. Unsurprisingly, it is making a couple of major announcements at the event that range from a new design system and improved JavaScript APIs to support for running ArcGIS Enterprise in containers on Kubernetes.

The Kubernetes project was a major undertaking for the company, Esri product managers Trevor Seaton and Philip Heede told me. Traditionally, like so many similar products, ArcGIS was architected to be installed on physical boxes, virtual machines or cloud-hosted VMs. And while it doesn’t really matter to end users where the software runs, containerizing the application means that it is far easier for businesses to scale their systems up or down as needed.

Esri ArcGIS Enterprise on Kubernetes deployment

“We have a lot of customers — especially some of the larger customers — that run very complex questions,” Seaton explained. “And sometimes it’s unpredictable. They might be responding to seasonal events or business events or economic events, and they need to understand not only what’s going on in the world, but also respond to their many users from outside the organization coming in and asking questions of the systems that they put in place using ArcGIS. And that unpredictable demand is one of the key benefits of Kubernetes.”
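That elasticity is straightforward to act on once ArcGIS Enterprise runs as containers. As a hedged illustration (the deployment name and namespace below are hypothetical; Esri ships its own deployment tooling), scaling a service up ahead of a demand spike with the official Kubernetes Python client looks like this:

```python
# Scale a deployment to meet a demand spike, using the official Kubernetes
# Python client. Requires: pip install kubernetes, plus cluster credentials.
from kubernetes import client, config

config.load_kube_config()  # read credentials from the local kubeconfig
apps = client.AppsV1Api()

# Bump a hypothetical map-rendering microservice to 10 replicas; running the
# same call later with a smaller number scales it back down.
apps.patch_namespaced_deployment_scale(
    name="arcgis-map-service",  # hypothetical deployment name
    namespace="arcgis",         # hypothetical namespace
    body={"spec": {"replicas": 10}},
)
```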

Deploying Esri ArcGIS Enterprise on Kubernetes

The team could have chosen the easy route and put a wrapper around its existing tools to containerize them and call it a day, but as Seaton noted, Esri used this opportunity to re-architect its tools and break them down into microservices.

“It’s taken us a while because we took three or four big applications that together make up [ArcGIS] Enterprise,” he said. “And we broke those apart into a much larger set of microservices. That allows us to containerize specific services and add a lot of high availability and resilience to the system without adding a lot of complexity for the administrators — in fact, we’re reducing the complexity as we do that and all of that gets installed in one single deployment script.”

While Kubernetes simplifies a lot of the management experience, a lot of companies that use ArcGIS aren’t yet familiar with it. And as Seaton and Heede noted, the company isn’t forcing anyone onto this platform. It will continue to support Windows and Linux just like before. Heede also stressed that it’s still unusual — especially in this industry — to see a complex, fully integrated system like ArcGIS being delivered in the form of microservices and multiple containers that its customers then run on their own infrastructure.

Image Credits: Esri

In addition to the Kubernetes announcement, Esri also today announced new JavaScript APIs that make it easier for developers to create applications that combine Esri’s server-side technology with the scalability of doing much of the analysis on the client side. Back in the day, Esri supported tools like Microsoft’s Silverlight and Adobe/Apache Flex for building rich web-based applications. “Now, we’re really focusing on a single web development technology and the toolset around that,” Esri product manager Julie Powell told me.

A bit later this month, Esri also plans to launch its new design system to make it easier and faster for developers to create clean and consistent user interfaces. This design system will launch April 22, but the company already provided a bit of a teaser today. As Powell noted, the challenge for Esri is that its design system has to help the company’s partners to put their own style and branding on top of the maps and data they get from the ArcGIS ecosystem.

Aporia raises $5M for its AI observability platform

Machine learning (ML) models are only as good as the data you feed them. That’s true during training, but also once a model is put in production. In the real world, the data itself can change as new events occur, and even small changes to how databases and APIs report and store data can have implications for how the models react. Since ML models will simply give you wrong predictions rather than throw an error, it’s imperative that businesses monitor the data pipelines that feed these systems.

That’s where tools like Aporia come in. The Tel Aviv-based company today announced that it has raised a $5 million seed round for its monitoring platform for ML models. The investors are Vertex Ventures and TLV Partners.

Image Credits: Aporia

Aporia co-founder and CEO Liran Hason, after five years with the Israel Defense Forces, worked on the data science team at Adallom, a security company that was acquired by Microsoft in 2015. After the sale, he joined venture firm Vertex Ventures before starting Aporia in late 2019. But it was during his time at Adallom that he first encountered the problems Aporia is now trying to solve.

“I was responsible for the production architecture of the machine learning models,” he said of his time at the company. “So that’s actually where, for the first time, I got to experience the challenges of getting models to production and all the surprises that you get there.”

The idea behind Aporia, Hason explained, is to make it easier for enterprises to implement machine learning models and leverage the power of AI in a responsible manner.

“AI is a super powerful technology,” he said. “But unlike traditional software, it highly relies on the data. Another unique characteristic of AI, which is very interesting, is that when it fails, it fails silently. You get no exceptions, no errors. That becomes really, really tricky, especially when getting to production, because in training, the data scientists have full control of the data.”

But as Hason noted, a production system may depend on data from a third-party vendor and that vendor may one day change the data schema without telling anybody about it. At that point, a model — say for predicting whether a bank’s customer may default on a loan — can’t be trusted anymore, but it may take weeks or months before anybody notices.

Aporia constantly tracks the statistical behavior of the incoming data and, when it drifts too far from the training set, alerts its users.
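Aporia hasn’t published its internals, but the underlying statistical idea is easy to demonstrate: compare a production feature’s distribution against the training distribution and alert when the two diverge. A toy sketch with a two-sample Kolmogorov-Smirnov test (synthetic data, and the general technique rather than Aporia’s method):

```python
# Detect distribution drift between training data and live traffic with a
# two-sample Kolmogorov-Smirnov test. Requires: pip install numpy scipy
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
training_income = rng.normal(50_000, 12_000, size=10_000)   # training snapshot
production_income = rng.normal(58_000, 12_000, size=2_000)  # drifted live feed

stat, p_value = ks_2samp(training_income, production_income)
if p_value < 0.01:  # the two samples are very unlikely to share a distribution
    print(f"Drift alert: KS statistic {stat:.3f}, p={p_value:.2e}")
```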

One thing that makes Aporia unique is that it gives its users an almost IFTTT- or Zapier-like graphical tool for setting up the logic of these monitors. It comes pre-configured with more than 50 combinations of monitors and provides full visibility into how they work behind the scenes. That, in turn, allows businesses to fine-tune the behavior of these monitors for their own specific business case and model.

Initially, the team thought it could build generic monitoring solutions. But it realized that this would be a very complex undertaking, and that the data scientists who build the models already know exactly how those models should work and what they need from a monitoring solution.

“Monitoring production workloads is a well-established software engineering practice, and it’s past time for machine learning to be monitored at the same level,” said Rona Segev, founding partner at TLV Partners. “Aporia’s team has strong production-engineering experience, which makes their solution stand out as simple, secure and robust.”

Okta launches a new free developer plan

At its Oktane21 conference, Okta, the popular authentication and identity platform, today announced a new — and free — developer edition that features fewer limitations and support for significantly more monthly active users than its current free plan.

The new ‘Okta Starter Developer Edition,’ as it’s called, allows developers to scale up to 15,000 monthly active users — up from only 1,000 on its existing free plan. In addition, the company is also launching enhanced documentation, a set of sample apps and new SDKs, which now cover languages and frameworks like Go, Java, JavaScript, Python, Vue.js, React Native and Spring Boot.
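To give a flavor of those SDKs, here is a small sketch with Okta’s Python management SDK; the org URL and API token are placeholders, the client is async so it’s driven with asyncio, and Okta’s developer docs should be checked for the current API surface:

```python
# List the users in an Okta org with the Python SDK. Requires: pip install okta
import asyncio
from okta.client import Client as OktaClient

async def main():
    client = OktaClient({
        "orgUrl": "https://dev-123456.okta.com",  # placeholder developer org
        "token": "MY_OKTA_API_TOKEN",             # placeholder API token
    })
    # The SDK returns a (results, response, error) tuple.
    users, resp, err = await client.list_users()
    if err is None:
        for user in users:
            print(user.profile.email)

asyncio.run(main())
```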

“Our overall philosophy isn’t, ‘we want to just provide […] a set of authentication and authorization services.’ The way we’re looking at this is, ‘hey, app developer, how do we provide you the foundation you need to get up and running quickly with authorization and authentication as one part of it,’” Diya Jolly, Okta’s chief product officer, told me. And she believes that Okta is in a unique position to do so, because it not only offers tools to manage authorization and access, but also systems for securing microservices and providing applications with access to privileged resources.

Image Credits: Okta

It’s also worth noting that, while the deal hasn’t closed yet, Okta’s intent to acquire Auth0 significantly extends its developer strategy, given Auth0’s developer-first approach.

As for the expanded free account, Jolly noted that the company found that developers wanted to be able to access more of the service’s features during their prototyping phases. That means the new free Developer Edition comes with support for multi-factor authentication, machine-to-machine tokens and B2B integrations, for example, in addition to expanded support for integrations into toolchains. As is so often the case with enterprise tools, the free edition doesn’t come with the usual enterprise support options and has lower rate limits than the paid plans.

Still, and Jolly acknowledged this, a small to medium-sized business may be able to build applications and take them into production based on this new free plan.

“15K [monthly active users] is a lot, but if you look at our customer base, it’s about the right amount for the smaller business applications, the real SMBs, and that was the goal. In a developer motion, you want people to try out things and then upgrade. I think that’s the key. No developer is going to come and build with you if you don’t have a free offering that they can tinker around and play with.”

Image Credits: Okta

She noted that the company has spent a lot of time thinking about how to support developers through the application development lifecycle overall. That includes better CLI tools for developers who would rather bypass Okta’s web-based console, for example, and additional integrations with tools like Terraform, Kong and Heroku. “Today, [developers] have to stitch together identity and Okta into those experiences — or they use some other identity — we’ve pre-stitched all of this for them,” Jolly said.

The new Okta Starter Developer Edition, as well as the new documentation, sample applications and integrations, are now available at developer.okta.com.

Arm announces the next generation of its processor architecture

Arm today announced Armv9, the next generation of its chip architecture. Its predecessor, Armv8, launched a decade ago, and while it has seen its fair share of changes and updates, the new architecture brings a number of major updates to the platform that warrant a shift in version numbers. Unsurprisingly, Armv9 builds on Armv8 and is backward compatible, but it specifically introduces new security, AI, signal processing and performance features.

Over the last five years, more than 100 billion Arm-based chips have shipped. But Arm believes that its partners will ship over 300 billion in the next decade. We will see the first Armv9-based chips in devices later this year.

Ian Smythe, Arm’s VP of Marketing for its client business, told me that he believes this new architecture will change the way we do computing over the next decade. “We’re going to deliver more performance, we will improve the security capabilities […] and we will enhance the workload capabilities because of the shift that we see in compute that’s taking place,” he said. “The reason that we’ve taken these steps is to look at how we provide the best experience out there for handling the explosion of data and the need to process it and the need to move it and the need to protect it.”

That neatly sums up the core philosophy behind these updates. On the security side, Armv9 will introduce Arm’s confidential compute architecture and the concept of Realms. These Realms enable developers to write applications where the data is shielded from the operating system and other apps on the device. Using Realms, a business application could shield sensitive data and code from the rest of the device, for example.

Image Credits: Arm

“What we’re doing with the Arm Confidential Compute Architecture is worrying about the fact that all of our computing is running on the computing infrastructure of operating systems and hypervisors,” Richard Grisenthwaite, the chief architect at Arm, told me. “That code is quite complex and therefore could be penetrated if things go wrong. And it’s in an incredibly trusted position, so we’re moving some of the workloads so that [they are] running on a vastly smaller piece of code. Only the Realm manager is the thing that’s actually capable of seeing your data while it’s in action. And that would be on the order of about a 10th of the size of a normal hypervisor and much smaller still than an operating system.”

As Grisenthwaite noted, it took Arm a few years to work out the details of this security architecture and ensure that it is robust enough — and during that time Spectre and Meltdown appeared, too, and set back some of Arm’s initial work because some of the solutions it was working on would’ve been vulnerable to similar attacks.

Image Credits: Arm

Unsurprisingly, another area the team focused on was enhancing the CPU’s AI capabilities. AI workloads are now ubiquitous. Arm had already introduced its Scalable Vector Extension (SVE) a few years ago, but at the time, this was meant for high-performance computing solutions like the Arm-powered Fugaku supercomputer.

Now, Arm is introducing SVE2 to enable more AI and digital signal processing (DSP) capabilities. Those can be used for image processing workloads, as well as other IoT and smart home solutions, for example. There are, of course, dedicated AI chips on the market now, but Arm believes that the entire computing stack needs to be optimized for these workloads and that there are a lot of use cases where the CPU is the right choice for them, especially for smaller workloads.

“We regard machine learning as appearing in just about everything. It’s going to be done in GPUs, it’s going to be done in dedicated processors, neural processors, and also done in our CPUs. And it’s really important that we make all of these different components better at doing machine learning,” Grisenthwaite said.

As for raw performance, Arm believes its new architecture will allow chip manufacturers to gain more than 30% in compute power over the next two chip generations, not just for mobile CPUs but also for the kind of infrastructure CPUs that large cloud vendors like AWS now offer their users.

“Arm’s next-generation Armv9 architecture offers a substantial improvement in security and machine learning, the two areas that will be further emphasized in tomorrow’s mobile communications devices,” said Min Goo Kim, the executive vice president of SoC development at Samsung Electronics. “As we work together with Arm, we expect to see the new architecture usher in a wider range of innovations to the next generation of Samsung’s Exynos mobile processors.”

Google starts trialing its FLoC cookie alternative in Chrome

Google today announced that it is rolling out Federated Learning of Cohorts (FLoC), a crucial part of its Privacy Sandbox project for Chrome, as a developer origin trial.

FLoC is meant to be an alternative to the kind of cookies that advertising technology companies use today to track you across the web. Instead of a personally identifiable cookie, FLoC runs locally and analyzes your browsing behavior to group you into a cohort of like-minded people with similar interests (and doesn’t share your browsing history with Google). That cohort is specific enough to allow advertisers to do their thing and show you relevant ads, but without being so specific as to allow marketers to identify you personally.

This “interest-based advertising,” as Google likes to call it, allows you to hide within a crowd of users with similar interests. All the browser exposes is a cohort ID, while your browsing history and other data stay local.
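Google’s proposal discusses locality-sensitive hashing techniques such as SimHash for forming these cohorts. As an illustration of the general idea only (not Chrome’s actual implementation), here is a toy SimHash over a browsing history, where overlapping histories yield fingerprints that differ in only a few bits:

```python
# Toy SimHash: map a set of visited domains to a short fingerprint so that
# similar histories land on similar cohort-like IDs. Illustrative only.
import hashlib

def simhash(domains, bits=16):
    acc = [0] * bits
    for d in domains:
        h = int(hashlib.sha256(d.encode()).hexdigest(), 16)
        for i in range(bits):
            acc[i] += 1 if (h >> i) & 1 else -1  # vote each bit up or down
    # The sign of each accumulator cell becomes one bit of the fingerprint.
    return sum(1 << i for i in range(bits) if acc[i] > 0)

history_a = ["news.example.com", "recipes.example.org", "gardening.example.net"]
history_b = ["news.example.com", "recipes.example.org", "diy.example.net"]
# Overlapping histories produce fingerprints with a small Hamming distance.
print(f"{simhash(history_a):016b}\n{simhash(history_b):016b}")
```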

Image Credits: Google / Getty Images

The trial will start in the U.S., Australia, Brazil, Canada, India, Indonesia, Japan, Mexico, New Zealand and the Philippines. Over time, Google plans to scale it globally. As we learned earlier this month, Google is not running any tests in Europe because of concerns around GDPR and other privacy regulations (in part, because it’s unclear whether FLoC IDs should be considered personal data under these regulations).

Users will be able to opt out from this origin trial, just like they will be able to do so with all other Privacy Sandbox trials.

Unsurprisingly, given how FLoC upends many of the existing online advertising systems in place, not everybody loves this idea. Advertisers obviously love the idea of being able to target individual users, though Google’s preliminary data shows that using these cohorts leads to similar results for them and that advertisers can expect to see “at least 95% of the conversions per dollar spent when compared to cookie-based advertising.”

Google notes that its own advertising products will get the same access to FLoC IDs as its competitors in the ads ecosystem.

But it’s not just the advertising industry that is eyeing this project skeptically. Privacy advocates aren’t fully sold on the idea either. The EFF, for example, argues that FLoC will make it easier for marketing companies to fingerprint users, since the FLoC IDs browsers expose add yet another signal to combine with other identifying data. That’s something Google is addressing with its Privacy Budget proposal, but how well that will work remains to be seen.

Meanwhile, users would probably prefer to just browse the web without seeing ads (no matter what the advertising industry may want us to believe) and without having to worry about their privacy. But online publishers continue to rely on advertising income to fund their sites.

With all of these divergent interests, it was always clear that Google’s initiatives weren’t going to please everyone. That friction was always built into the process. And while other browser vendors can outright block ads and third-party cookies, Google’s role in the advertising ecosystem makes this a bit more complicated.

“When other browsers started blocking third-party cookies by default, we were excited about the direction, but worried about the immediate impact,” Marshall Vale, Google’s product manager for Privacy Sandbox, writes in today’s announcement. “Excited because we absolutely need a more private web, and we know third-party cookies aren’t the long-term answer. Worried because today many publishers rely on cookie-based advertising to support their content efforts, and we had seen that cookie blocking was already spawning privacy-invasive workarounds (such as fingerprinting) that were even worse for user privacy. Overall, we felt that blocking third-party cookies outright without viable alternatives for the ecosystem was irresponsible, and even harmful, to the free and open web we all enjoy.”

It’s worth noting that FLoC, as well as Google’s other privacy sandbox initiatives, are still under development. The company says the idea here is to learn from these initial trials and evolve the project accordingly.