The number of Alexa skills in the U.S. more than doubled in 2018

Amazon Alexa had a good year as a developer platform – at least in terms of the number of voice apps being built for Alexa, if not yet the monetization of those apps. According to new data published today by Voicebot, the number of Amazon Alexa skills in the U.S. more than doubled over 2018, while skill counts grew by 233 percent and 152 percent in Alexa’s two other top markets, the U.K. and Germany, respectively.

Amazon began the year with 25,784 Alexa skills in the U.S., which grew to 56,750 skills by the end of 2018, said Voicebot. That represents 120 percent growth, which is down from the 266 percent growth seen the year prior – but still shows continued developer interest in the Alexa platform.

At that rate of growth, developers were publishing an average of around 85 new skills per day in the U.S. in 2018 – roughly 31,000 additional skills spread across 365 days.

Voicebot has its own method for tracking skill counts, so these are not Amazon’s own numbers, we should note. However, Amazon itself did say at year-end 2018 that its broader Alexa ecosystem had grown to “over 70,000” total skills across markets.

In the U.K., the number of Alexa skills rose 233 percent this year to reach 29,910 by year end. In Germany, the skill count grew by 152 percent to reach 7,869 skills. Canada had 22,873 skills as of the beginning of January 2019; Australia had 22,398; Japan had 2,364; and France had 981. (Voicebot says it hasn’t yet set up a system for counting skills in India, Spain, Mexico or Italy.)

Also of interest is that much of the skill growth occurred near year-end, ahead of the busy holiday season when Alexa devices became top sellers. In the U.S., U.K. and Germany, developers published 181, 84, and 37 skills per day, respectively, during the last two months of the year.

The firm also pointed out there is some debate over whether or not the growth in third-party skills even matters, since so many of them are virtually invisible – never discovered by end users or installed in large numbers. That’s a fair criticism, in a way, but it’s also still early days for voice-based computing. Developers who are today publishing lower-rated skills may be learning from their mistakes and figuring out what works; and they’re doing so, in large numbers, on the Alexa platform.

As to what sort of skills are actually striking a chord with consumers, Amazon itself recently shared that information.

It released a year-end list of Alexa’s “top” skills, which were selected based on a number of factors including customer reviews, engagement, innovation and more, Amazon told us.

Many of the top skills were games. And many had benefited from their association with big-name brands, or had been promoted heavily by Amazon, or both.

Among the top games were music skill Beat the Intro; Heads Up!, already a top paid iOS app from Ellen DeGeneres; National Geographic’s Geo Quiz skill; Question of the Day; Skyrim Very Special Edition; The Magic Door; Trivia Hero; World Mathematics League; Would You Rather for Family; and Volley’s roleplaying game, Yes Sire.

The non-game skills were focused on daily habits, wellness, and – not surprisingly, given Alexa’s central place in consumers’ homes – family fun.

These included kid-friendly skills like Animal Workout, Chompers, Kids Court, Lemonade Stand, and Sesame Street; plus habit and wellness skills like Chop Chop, Fitbit, Headspace, Sleep and Relaxation Sounds, Find My Phone, AnyPod, Big Sky, Make Me Smart, and TuneIn Live.

It’s interesting to note that many of these also are known app names from the mobile app ecosystem, rather than breakout hits that are unique to Alexa or smart speakers. That raises the question of how much the voice app ecosystem will end up being just a voice-enabled clone of the App Store, versus becoming a home to a new kind of app that truly leverages voice-first design and smart speakers’ capabilities.

It may be a few years before we have that answer, but in the meantime, it seems we have a lot of voice app developers trying to figure that out by building for Alexa.

Google warns app developers of three malicious SDKs being used for ad fraud

A few days ago, Google removed popular Cheetah Mobile and Kika Tech apps from its Play Store following a BuzzFeed investigation, which discovered the apps were engaging in ad fraud. Today, as a result of its ongoing investigation into the situation, Google said it has discovered three malicious ad network SDKs that were being used to conduct ad fraud in these apps. The company is now emailing developers who have these SDKs installed in their apps and demanding their removal. Otherwise, the developers’ apps will be pulled from Google Play, as well.

To be clear, the developers with the SDKs (software development kits) installed aren’t necessarily aware of the SDKs’ malicious nature. In fact, most are likely not, Google says.

Google shared this news in a blog post today, but it didn’t name the SDKs that were involved in the ad fraud scheme.

TechCrunch has learned the ad network SDKs in question are AltaMob, BatMobi and YeahMobi.

Google didn’t share the extent to which these SDKs are being used in Android apps, but based on its blog post, it appears to be taking the situation seriously – which points to the potential scale of this abuse.

“If an app violates our Google Play Developer policies, we take action,” wrote Dave Kleidermacher, VP, Head of Security & Privacy, Android & Play, in the post. “That’s why we began our own independent investigation after we received reports of apps on Google Play accused of conducting app install attribution abuse by falsely claiming credit for newly installed apps to collect the download bounty from that app’s developer,” he said.

The developers will have a short grace period to remove the SDKs from their apps.

The original BuzzFeed report had found that eight apps with a total of 2 billion downloads from Cheetah Mobile and Kika Tech had been exploiting user permissions as part of an ad fraud scheme, according to research from app analytics and research firm Kochava, which was shared with BuzzFeed.

Following the report, Cheetah Mobile apps Battery Doctor and CM Launcher were removed by Cheetah itself. The company additionally issued a press release aimed at reassuring investors that the removal of CM File Manager wouldn’t impact its revenue. It also said it was in discussions with Google to resolve the issues.

As of today, Google’s investigation into these apps is not fully resolved.

But it pulled two apps from Google Play on Monday: Cheetah Mobile’s File Manager and the Kika Keyboard. The apps, the report had said, contained code that was used for ad fraud – specifically, ad fraud techniques known as click injection and click flooding.

The apps were engaging in app install attribution abuse, which refers to a means of falsely claiming credit for a newly installed app in order to collect the download bounty from the app developer. The three SDKs that Google is now banishing were found to be falsely crediting app installs by creating false clicks.

Combined, the two companies had hundreds of millions of active users, and the two apps that were removed had a combined 250 million installs.

In addition to removing the two apps from Google Play, Google also kicked them out of its AdMob mobile advertising network.

With Cheetah’s voluntary removal of two apps and Google’s booting of two more, a total of four of the eight apps that were conducting ad fraud are now gone from the Google Play store. When Google’s investigation wraps, the other four may be removed as well.

Even more apps could be removed in the future, too, given that Google is demanding that developers now remove the malicious SDKs. Those who fail to comply will get the boot, too.

One resource Google Play publishers, ad attribution providers, and advertisers may want to take advantage of going forward is the Google Play Install Referrer API, which tells them how their apps were actually installed.

Explains Google in its blog post:

Google Play has been working to minimize app install attribution fraud for several years. In 2017 Google Play made available the Google Play Install Referrer API, which allows ad attribution providers, publishers and advertisers to determine which referrer was responsible for sending the user to Google Play for a given app install. This API was specifically designed to be resistant to install attribution fraud and we strongly encourage attribution providers, advertisers and publishers to insist on this standard of proof when measuring app install ads. Users, developers, advertisers and ad networks all benefit from a transparent, fair system.

“We will continue to investigate and improve our capabilities to better detect and protect against abusive behavior and the malicious actors behind them,” said Kleidermacher.

Google’s Flutter toolkit goes beyond mobile with Project Hummingbird

Flutter, Google’s toolkit for building cross-platform applications, hit version 1.0 today. To date, the project has focused on iOS and Android apps, but as the company announced today, it’s now looking at bringing Flutter to the web, too. That project, currently called Hummingbird, is essentially an experimental web-based implementation of the Flutter runtime.

“From the beginning, we designed Flutter to be a portable UI toolkit, not just a mobile UI toolkit,” Google’s group product manager for Flutter, Tim Sneath, told me. “And so we’ve been experimenting with how we can bring Flutter to different places.”

Hummingbird takes the Dart code that all Flutter applications are written in and then compiles it to JavaScript, which in turn allows the code to run in any modern browser. Developers have always been able to compile Dart to JavaScript, so this part isn’t new, but ensuring that the Flutter engine would work and bringing all the relevant Flutter features to the web was a major engineering effort. Indeed, Google built three prototypes to see how this could work. Just bringing the widgets over wasn’t enough. A combination of the Flutter widgets and its layout system was also discarded, and in the end the team decided to build a full Flutter web engine that retains all of the layers that sit above the dart:ui library.

“One of the great things about Flutter itself is that it compiles to machine code, to Arm code. But Hummingbird extends that further and says, okay, we’ll also compile to JavaScript and we’ll replace the Flutter engine on the web with the Hummingbird engine which then enables Flutter code to run without changes in web browsers. And that, of course, extends Flutter’s perspective to a whole new ecosystem.”

With tools like Electron, it’s easy enough to bring a web app to the desktop, too, so there’s now also a pathway for bringing Flutter apps to Windows and macOS that way, though there is already another project in progress to embed Flutter into native desktop apps as well.

It’s worth noting that Google always touted the fact that Flutter compiled to native code — and the speed gains it got from that. Compiling to the web is a bit of a tradeoff, though. Sneath acknowledged as much and stressed that Hummingbird is an experimental project and that Google isn’t releasing any code today. Right now, it’s a technical demonstration.

“If you can go native, you should go native,” he said. “Think of it as an extension of Flutter’s reach rather than a solution to the problem that Flutter itself is solving.”

In its current iteration, the Flutter web engine can handle most apps, but there’s still a lot of work to do to ensure that all widgets run correctly, for example. The team is also looking at building a plugin system and ways to embed Flutter into existing web apps – and existing web apps into Flutter web apps.


Amazon launches ‘Alexa-hosted skills’ for voice app developers

Amazon on Thursday launched a new service aimed at Alexa developers that automatically provisions and helps them manage a set of AWS cloud resources for their Alexa skill’s backend service. The service is intended to shorten the time it takes developers to launch their skills, by letting them focus on the skill’s design and unique features rather than the cloud services it needs.

“Previously you had to provision and manage this back-end on your own with a cloud endpoint, resources for media storage, and a code repository,” explained Amazon on its company blog post, announcing the new service. “Alexa-hosted skills offer an easier option. It automatically provisions and hosts an AWS Lambda endpoint, Amazon S3 media storage, and a table for session persistence so that you can get started quickly with your latest project.”

Developers will also be able to use a new code editor in the ASK Developer Console to edit their code, while AWS Lambda will handle routing the skill request, executing the skill’s code, and managing the skill’s compute resources.
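To give a sense of what that hosted Lambda endpoint runs, here is a minimal sketch of a skill handler using the raw JSON request and response format Alexa sends to the endpoint. The handler name and speech strings are illustrative; a production skill would typically use the ASK SDK and define its own intents.

```python
# Minimal sketch of the Lambda code behind an Alexa-hosted skill.
# The intent names and speech strings here are purely illustrative.
def lambda_handler(event, context):
    request = event["request"]
    request_type = request["type"]

    if request_type == "LaunchRequest":
        speech = "Welcome. Ask me for today's fact."
        end_session = False
    elif request_type == "IntentRequest":
        intent_name = request["intent"]["name"]
        speech = f"You invoked the {intent_name} intent."
        end_session = True
    else:  # SessionEndedRequest and anything unexpected
        speech = "Goodbye."
        end_session = True

    # Alexa expects a response envelope shaped like this.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": end_session,
        },
    }
```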

Amazon S3, meanwhile, can be used for things the skill needs to store – like media files, such as the images being used for the skill’s Echo Show, Echo Spot and Fire TV versions.

The service comes at a time when Amazon Alexa and Google Home are in a race to grab market share – and mind share – in the smart speaker industry. A lot of this will come down to how useful these devices are for customers – and well-designed skills are a part of that.

Smart speaker adoption is growing fast in the U.S., having recently reached 57.8 million adults, according to a report from Voicebot. But in terms of third-party development of voice apps, Amazon leads Google, with Alexa having passed 40,000 U.S. skills in September.

Amazon says Alexa-hosted skills are available to developers in all Alexa locales. Developers can apply to join the preview here.

Alexa’s skills can now work together, initially for bookings and printing

Amazon this morning announced a new way for customers to use Alexa’s skills – together, in combined requests. That is, you can start a request in one skill, then have it fulfilled in another. For example, the AllRecipes skill can now connect to the HP skill in order to print out recipes for customers, Amazon says.

This is the first of many combined skills to come.

Skill Connections, as the developer-facing feature is called, can initially be used to take three types of actions – printing, booking a reservation, or booking a ride.

That means future skills could allow you to book a concert ticket through one skill, then connect to a taxi skill to find you a ride to the show. The idea is that the customer wouldn’t have to separately invoke the different skills to complete the one task they wanted to accomplish (i.e., going to a show), or repeat information. Instead, data is passed from one skill to the next.
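Mechanically, the requesting skill hands the task off by returning a directive in its response, and Alexa routes that payload to the provider skill. The sketch below is only an illustration of that handoff – the directive type, task name and payload fields shown are assumptions for illustration, not Amazon’s documented Skill Connections schema.

```python
# Illustrative sketch of a requester skill handing a "print this recipe"
# task to a provider skill. The directive type, task name and payload
# fields are assumptions for illustration, not Amazon's documented schema.
def build_print_handoff_response(recipe_title, pdf_url):
    return {
        "version": "1.0",
        "response": {
            "directives": [
                {
                    "type": "Connections.SendRequest",  # assumed directive type
                    "name": "Print",                    # assumed task name
                    "payload": {
                        "title": recipe_title,
                        "url": pdf_url,
                    },
                    "token": "print-recipe-request",
                }
            ],
            "shouldEndSession": False,
        },
    }
```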

This isn’t the first time Alexa has tried to tie skills together in some way, but it is the first time it has actually allowed two skills to talk to one another. Previously, Alexa would make game recommendations when customers exited a skill, as a means of exposing Alexa users to new content they might like. But this was more of a nudge to launch another skill, not a direct connection between the two.

Skill Connections is launching into a developer preview starting today. During this testing period, printing will be provided by a skill from HP, food reservations will be provided by OpenTable, and taxi reservations will be provided by Uber. Epson and Canon will soon provide print services as well, Amazon notes.

The skills can also take advantage of Amazon Pay for Alexa Skills and in-skill purchasing announced earlier this year.

Developers who are accepted into the preview could do things like offer to print a game’s leaderboard using the HP skill, or book a taxi to a spot where you’ve made a reservation, Amazon also suggests. To be considered, developers first have to fill out a survey.

Developers can apply either to connect their skill to those from HP, OpenTable or Uber, or they can apply to provide services to other skills. The feature will remain in testing for now, with a public launch planned for a later, as-yet-unknown date.


Amazon introduces APL, a new design language for building Alexa skills for devices with screens

Along with the launch of the all-new Echo Show, the Alexa-powered device with a screen, Amazon also introduced a new design language for developers who want to build voice skills that include multimedia experiences.

Called the Alexa Presentation Language, or APL, it lets developers build voice-based apps that also include things like images, graphics, slideshows and video, and easily customize them for different device types – including not only the Echo Show, but other Alexa-enabled devices like Fire TV, Fire Tablet, and the small screen of the Alexa alarm clock, the Echo Spot.

In addition, third-party devices with screens will be able to take advantage of APL through the Alexa Smart Screen and TV Device SDK, arriving in the months ahead. Sony and Lenovo will be putting this to use first.

Voice-based skill experiences can sometimes feel limited because of their lack of a visual component. For example, a cooking skill would work better if it just showed the steps as Alexa guided users through them. Other skills could simply benefit from visual cues or other complementary information, like lists of items.

Amazon says it found that Alexa skills that use visual elements are used twice as much as voice-only skills, which is why it wanted to improve the development of these visual experiences.

The new language was built from the ground up specifically for adapting Alexa skills for different screen-based, voice-first experiences.

At launch, APL supports experiences that include text, graphics, and slideshows, with video support coming soon. Developers could do things like sync the on-screen text and images with Alexa’s spoken voice. Plus, the new skills built with this language could allow for both voice commands, as well as input through touch or remote controls, if available.

The language is also designed to be flexible in terms of the placement of the graphics or other visual elements, so companies can adhere to their brand guidelines, Amazon says. And it’s adaptable to many different types of screen-based devices, including those with different sized screens or varying memory or processing capabilities.

When introducing the new language at an event in Seattle this morning, Amazon said that APL will feel familiar to anyone who’s used to working with front-end development, as it adheres to universally understood styling practices and uses similar syntax.

Amazon is also providing sample APL documents to help developers get started, which can be used as-is or can be modified. Developers can choose to build their own from scratch, as well.

These APL documents are JSON files sent from a skill to a device. The device then evaluates the document, imports the images and other data, and renders the experience. Developers can use elements like images, text, scroll views, pages, sequences, layouts, conditional expressions, speech synchronization, and other commands. Support for video, audio and HTML5 is coming soon.
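As a concrete illustration, here is a minimal sketch of a skill response that pairs spoken output with a bare-bones APL document – a single Text component – sent via the RenderDocument directive Amazon describes for APL. Real documents typically add layouts, styles and data binding, and would be tailored per device.

```python
# Minimal sketch: pairing speech with a bare-bones APL document (one Text
# component) sent to a screen device. Real skills add layouts, styles and
# data binding, and adapt the document per device.
def build_apl_response(speech_text, headline_text):
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech_text},
            "directives": [
                {
                    "type": "Alexa.Presentation.APL.RenderDocument",
                    "token": "headlineDocument",
                    "document": {
                        "type": "APL",
                        "version": "1.0",
                        "mainTemplate": {
                            "items": [
                                {"type": "Text", "text": headline_text}
                            ]
                        },
                    },
                }
            ],
            "shouldEndSession": False,
        },
    }
```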

“This year alone, customers have interacted with visual skills hundreds of millions of times. You told us you want more design flexibility – in both content and layout – and the ability to optimize experiences for the growing family of Alexa devices with screens,” said Nedim Fresko, VP of Alexa Devices and Developer Technologies, in a statement. “With the Alexa Presentation Language, you can unleash your creativity and build interactive skills that adapt to the unique characteristics of Alexa Smart Screen devices,” he said.

A handful of skills have already put APL to use, including a CNBC skill that shows a graph of stock performance; Big Sky, which shows images to accompany its weather forecasts; NextThere, which lets you view public transit schedules; Kayak, which shows slideshows of travel destinations; Food Network, which shows recipes; and several others.

Alexa device owners will be able to use these APL-powered skills starting next month. The Developer Preview for APL starts today.

Check out our full coverage from the event here.

6 million users had installed third-party Twitter clients

Twitter tried to downplay the impact deactivating its legacy APIs would have on its community and the third-party Twitter clients preferred by many power users by saying that “less than 1%” of Twitter developers were using these old APIs. Twitter is correct in its characterization of the size of this developer base, but it’s overlooking millions of third-party app users in the process. According to data from Sensor Tower, 6 million App Store and Google Play users installed the top five third-party Twitter clients between January 2014 and July 2018.

Over the past year, these top third-party apps were downloaded 500,000 times.

This data is largely free of reinstalls, the firm also said.

The top third-party Twitter apps users installed over that four-and-a-half-year period have included Twitterrific, Echofon, TweetCaster, Tweetbot, and Ubersocial.

Of course, some portion of those users may have since switched to Twitter’s native app for iOS or Android, or they may run both a third-party app and Twitter’s own app in parallel.

Even if only some of these six million users remain, they represent a small, vocal, and – in some cases – prominent user base. It’s one that is very upset right now, too. And for a company that just posted a loss of 1 million users in its last earnings report, it seems odd that Twitter would not figure out a way to accommodate this crowd, or even bring them onboard its new API platform to make money from them.

Twitter, apparently, weighed data and facts, not user sentiment and public perception, when it made this decision. But some things have more value than numbers on a spreadsheet. They are part of a company’s history and culture. Of course, Twitter has every right to blow all that up and move on, but that doesn’t make it the right decision.

To be fair, Twitter is not lying when it says this is a small group. The third-party user base is tiny compared with Twitter’s native app user base. During the same time that 6 million people were downloading third-party apps, the official Twitter app was installed a whopping 560 million times across iOS and Android. That puts the third-party apps’ share of installs at about 1.1% of the total.

That user base may have been shrinking over the years, too. During the past year, while the top third-party apps were installed half a million times, Twitter’s app was installed 117 million times. This made third-party apps’ share only about 0.4% of downloads, giving the official app a more than 99% market share.

But third-party app developers and the apps’ users are power users. Zealots, even. Evangelists.

Twitter itself credited them with pioneering “product features we all know and love” like the mute option, pull-to-refresh, and more. That means the apps’ continued existence brings more value to Twitter’s service than numbers alone can show.


They are part of Twitter’s history. You can even credit one of the apps for Twitter’s logo! Initially, Twitter only had a typeset version of its name. Then Twitterrific came along and introduced a bird for its logo. Twitter soon followed.

Twitterrific was also the first to use the word “tweet,” which is now standard Twitter lingo. (The company itself had used “twitter-ing.” Can you imagine?)

These third-party apps also play a role in retaining users who struggle with the new user experience Twitter has adopted – its algorithmic timeline. The apps instead offer a chronological view of tweets, which some users continue to prefer.

Twitter’s decision to cripple these developers’ apps is shameful.

It shows a lack of respect for Twitter’s history, its power user base, its culture of innovation, and its very own nature as a platform, not a destination.

Google Play now makes it easier to manage your subscriptions

Mobile app subscriptions are a big business, but consumers sometimes hesitate to sign up because pausing and cancelling existing subscriptions hasn’t been as easy as opting in. Google is now addressing those concerns with the official launch of its subscription center for Android users. The new feature centralizes all your Google Play subscriptions, and offers a way for you to find others you might like to try.

The feature was first introduced at Google’s I/O developer conference in May, and recently rolled out to Android users, the company says. However, Google hadn’t formally announced its arrival until today.

Access to the subscriptions center only takes one tap – the link is directly available from the “hamburger” menu in the Play Store app.

Apple’s page for subscription management, by comparison, is far more tucked away.

On iOS, you have to tap on your profile icon in the App Store app, then tap on your name. This already seems unintuitive – especially considering that a link to “Purchases” is on this Account screen. Why wouldn’t Subscriptions be here, too? Instead, you have to go to the next screen, then scroll down near the bottom to find “Subscriptions” and tap that. To turn any individual subscription off, you have to go to its own page, scroll to the bottom and tap “Cancel.”

This process should be more streamlined for iOS users.

In Google Play’s Subscriptions center, you can view all your existing subscriptions, cancel them, renew them, or even restore those you had previously cancelled – perfect for turning HBO NOW back on when “Game of Thrones” returns, for example.

You can also manage and update your payment methods, and set up a backup method.

Making it just as easy for consumers to get out of their subscriptions as it is to sign up is a good business practice, and could boost subscription sign-ups overall, which benefits developers. When consumers aren’t afraid they’ll forget or not be able to find the cancellation options later on, they’re more likely to give subscriptions a try.

In addition, developers can now create deep links to their subscriptions, which they can distribute across the web, email, and social media. This makes it easier to send people straight to their app’s subscription management page. When users cancel, developers can also trigger a survey to find out why – and possibly tweak their product offerings as a result of this user feedback.
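The deep link itself is just a URL into the Play Store’s subscription screen. As a quick sketch – assuming the sku and package query parameters Google documents for these links, with placeholder values – a developer might build one like this:

```python
# Sketch: building a deep link to a specific subscription's management page
# in the Play Store. Assumes the documented "sku" and "package" parameters;
# the product ID and package name below are placeholders.
from urllib.parse import urlencode

def subscription_deep_link(sku: str, package_name: str) -> str:
    base = "https://play.google.com/store/account/subscriptions"
    return f"{base}?{urlencode({'sku': sku, 'package': package_name})}"

print(subscription_deep_link("premium_monthly", "com.example.app"))
```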

There’s also a new subscription discovery section that will help Android users find subscription-based apps through both curated and localized collections, Google notes.

These additional features, along with a good handful of subscription management tools for developers, were all previously announced at I/O but weren’t in their final state at the time. Google had cautioned that it may tweak the look-and-feel of the product between the developer event and the public launch, but it looks the same as what was shown before – right down to the demo subscription apps.

Subscriptions are rapidly becoming a top way for developers to generate revenue for their applications. Google says subscribers are growing at more than 80 percent year-over-year. Sensor Tower also reported that app revenue grew 35 percent to $60 billion in 2017, in part thanks to the growth in subscriptions.

Amazon starts shipping its $249 DeepLens AI camera for developers

Back at its re:Invent conference in November, AWS announced its $249 DeepLens, a camera that’s specifically geared toward developers who want to build and prototype vision-centric machine learning models. The company started taking pre-orders for DeepLens a few months ago, but now the camera is actually shipping to developers.

Ahead of today’s launch, I had a chance to attend a workshop in Seattle with DeepLens senior product manager Jyothi Nookula and Amazon’s VP for AI Swami Sivasubramanian to get some hands-on time with the hardware and the software services that make it tick.

DeepLens is essentially a small Ubuntu- and Intel Atom-based computer with a built-in camera that’s powerful enough to easily run and evaluate visual machine learning models. In total, DeepLens offers about 106 GFLOPS of performance.

The hardware has all of the usual I/O ports (think Micro HDMI, USB 2.0, Audio out, etc.) to let you create prototype applications, no matter whether those are simple toy apps that send you an alert when the camera detects a bear in your backyard or an industrial application that keeps an eye on a conveyor belt in your factory. The 4 megapixel camera isn’t going to win any prizes, but it’s perfectly adequate for most use cases. Unsurprisingly, DeepLens is deeply integrated with the rest of AWS’s services. Those include the AWS IoT service Greengrass, which you use to deploy models to DeepLens, for example, but also SageMaker, Amazon’s newest tool for building machine learning models.

These integrations are also what makes getting started with the camera pretty easy. Indeed, if all you want to do is run one of the pre-built samples that AWS provides, it shouldn’t take you more than 10 minutes to set up your DeepLens and deploy one of these models to the camera. Those project templates include an object detection model that can distinguish between 20 objects (though it had some issues with toy dogs), a style transfer example to render the camera image in the style of van Gogh, a face detection model, a model that can distinguish between cats and dogs, and one that can recognize about 30 different actions (like playing guitar, for example). The DeepLens team is also adding a model for tracking head poses. Oh, and there’s also a hot dog detection model.

But that’s obviously just the beginning. As the DeepLens team stressed during our workshop, even developers who have never worked with machine learning can take the existing templates and easily extend them. In part, that’s due to the fact that a DeepLens project consists of two parts: the model and a Lambda function that runs instances of the model and lets you perform actions based on the model’s output. And with SageMaker, AWS now offers a tool that also makes it easy to build models without having to manage the underlying infrastructure.
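As a rough sketch of what the device-side half of such a project looks like – assuming the awscam and greengrasssdk helper modules that the published DeepLens sample functions use, whose exact signatures may differ – the inference loop boils down to a few lines:

```python
# Rough sketch of a DeepLens inference Lambda, modeled on the sample
# projects. Assumes the device-side awscam and greengrasssdk modules;
# the model path, input size, topic and threshold are illustrative.
import json

import cv2             # pre-installed on the device, used to resize frames
import awscam          # DeepLens camera and model helpers
import greengrasssdk   # publishes results back to AWS IoT

iot = greengrasssdk.client("iot-data")
model = awscam.Model("/opt/awscam/artifacts/my-model.xml", {"GPU": 1})

def inference_loop():
    while True:
        ret, frame = awscam.getLastFrame()       # grab the latest camera frame
        if not ret:
            continue
        resized = cv2.resize(frame, (300, 300))  # match the model's input size
        results = model.parseResult("ssd", model.doInference(resized))
        # Report anything the object detector saw above a confidence bar.
        hits = [r for r in results.get("ssd", []) if r.get("prob", 0) > 0.5]
        if hits:
            iot.publish(topic="deeplens/detections", payload=json.dumps(hits))
```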

You could do a lot of the development on the DeepLens hardware itself, given that it is essentially a small computer, though you’re probably better off using a more powerful machine and then deploying to DeepLens using the AWS Console. If you really wanted to, you could use DeepLens as a low-powered desktop machine as it comes with Ubuntu 16.04 pre-installed.

For developers who know their way around machine learning frameworks, DeepLens makes it easy to import models from virtually all the popular tools, including Caffe, TensorFlow, MXNet and others. It’s worth noting that the AWS team also built a model optimizer for MXNet models that allows them to run more efficiently on the DeepLens device.

So why did AWS build DeepLens? “The whole rationale behind DeepLens came from a simple question that we asked ourselves: How do we put machine learning in the hands of every developer,” Sivasubramanian said. “To that end, we brainstormed a number of ideas and the most promising idea was actually that developers love to build solutions as hands-on fashion on devices.” And why did AWS decide to build its own hardware instead of simply working with a partner? “We had a specific customer experience in mind and wanted to make sure that the end-to-end experience is really easy,” he said. “So instead of telling somebody to go download this toolkit and then go buy this toolkit from Amazon and then wire all of these together. […] So you have to do like 20 different things, which typically takes two or three days and then you have to put the entire infrastructure together. It takes too long for somebody who’s excited about learning deep learning and building something fun.”

So if you want to get started with deep learning and build some hands-on projects, DeepLens is now available on Amazon. At $249, it’s not cheap, but if you are already using AWS – and maybe even use Lambda already – it’s probably the easiest way to get started with building this kind of machine learning-powered application.

App developers get their wish with expanded support for free trials

A group of Apple developers recently banded together as a group called “The Developers Union” in order to plead with Apple, en masse, to allow them to offer free trials of their apps to end users. While not a traditional union with dues, it represented the first time a large group of developers pushed back at Apple’s control of the App Store’s policies. Today, it seems, the developers are having their voices heard.

In Apple’s newly updated App Store guidelines, the company has changed its policy around free trials. Previously, it allowed free trials of subscription-based apps, but now any app can offer a free trial.

The change, spotted by 9to5Mac, clarifies how this system will operate.

Apple says developers of non-subscription apps may offer a “free time-based trial period” before presenting a full unlock option by setting up a non-consumable in-app purchase that doesn’t cost any money.

The in-app purchase must specify the time frame the trial is being offered, and clearly explain to users what content and services it includes.

While Apple may have already been considering support for free trials for all apps, it’s notable that the change followed The Developers Union’s open letter on this matter. That gives the appearance, at least, that the developers had some sway. This is important because the group says it plans to advocate for other changes in the future, including a “more reasonable revenue cut” and “other community-driven, developer-friendly changes.”

As for the request for free trial support, there are currently 636 apps backing this cause on the union’s website – and the majority come from indie developers looking to grow their businesses, not the major players.

Their letter specifically asked Apple to commit to “allowing free trials for all apps for the App Stores before July 2019.”

The updated support for free trials wasn’t the only significant change in the new App Store guidelines.

Apple also added a new section that requires apps to implement “appropriate security measures” for handling user data – a rule that could allow it to boot out shadier applications. Another privacy-related change said in-app ads can’t target “sensitive user data” and have to be age-appropriate.

The company addressed the situation with the rejection of the Steam Link app, as well, by saying that cross-platform apps may allow users to access content acquired on the other platforms, but only if it’s also available as an in-app purchase.

And Apple spelled out that apps cannot mine for cryptocurrency in the background, and explained how crypto apps should operate:

(i) Wallets: Apps may facilitate virtual currency storage, provided they are offered by developers enrolled as an organization.

(ii) Mining: Apps may not mine for cryptocurrencies unless the processing is performed off device (e.g. cloud-based mining).

(iii) Exchanges: Apps may facilitate transactions or transmissions of cryptocurrency on an approved exchange, provided they are offered by the exchange itself.

(iv) Initial Coin Offerings: Apps facilitating Initial Coin Offerings (“ICOs”), cryptocurrency futures trading, and other crypto-securities or quasi-securities trading must come from established banks, securities firms, futures commission merchants (“FCM”), or other approved financial institutions and must comply with all applicable law.

(v) Cryptocurrency apps may not offer currency for completing tasks, such as downloading other apps, encouraging other users to download, posting to social networks, etc.