Alexa’s skills can now work together, initially for bookings and printing

Amazon this morning announced a new way for customers to use Alexa’s skills – together, in combined requests. That is, you can start a request in one skill, then have it fulfilled in another. For example, the AllRecipes skill can now connect to the HP skill in order to print out recipes for customers, Amazon says.

This is the first of many combined skills to come.

Skill Connections, as the developer-facing feature is called, can initially be used to take three types of actions – printing, booking a reservation, or booking a ride.

That means future skills could allow you to book a concert ticket through one skill, then connect to a taxi skill to find you a ride to the show. The idea is that the customer wouldn’t have to separately invoke the different skills to complete the one task they wanted to accomplish (i.e., going to a show), or repeat information. Instead, data is passed from one skill to the next.
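Amazon hasn’t published full technical details of the preview, but based on how Alexa directives are generally structured, the hand-off might look roughly like the sketch below: the requesting skill returns a directive asking Alexa to route a task, along with its data, to a provider skill. The directive type, URI and payload fields here are illustrative assumptions, not the confirmed preview API.

```python
# Illustrative sketch of a Skill Connections hand-off. The directive type,
# URI and payload fields are assumptions for illustration, not the
# confirmed preview API.
print_request_directive = {
    "type": "Connections.StartConnection",    # hypothetical directive type
    "uri": "connection://AMAZON.PrintPDF/1",  # hypothetical print-task URI
    "input": {
        "@type": "PrintPDFRequest",
        "@version": "1",
        "title": "Chicken Pot Pie",
        "url": "https://example.com/recipes/chicken-pot-pie.pdf",
    },
    "token": "recipe-print-0001",  # lets the requester correlate the response
}
```

The provider skill (HP’s, in this case) would receive the request, complete the print job and return a response, so the customer never has to invoke it separately.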

This isn’t the first time Alexa has tried to tie skills together in some way, but it is the first time two skills have actually been allowed to talk to one another. Previously, Alexa made game recommendations when customers exited a skill, as a means of exposing Alexa users to new content they might like. But this was more of a nudge to launch another skill, not a direct connection between the two.

Skill Connections is launching into a developer preview starting today. During this testing period, printing will be provided by a skill from HP, food reservations will be provided by OpenTable, and taxi reservations will be provided by Uber. Epson and Canon will soon provide print services as well, Amazon notes.

The skills can also take advantage of Amazon Pay for Alexa Skills and the in-skill purchasing feature announced earlier this year.

Developers who are accepted into the preview could do things like offer to print a game’s leaderboard using the HP skill, or book a taxi to a spot where you’ve made a reservation, Amazon also suggests. To be considered, developers first have to fill out a survey.

Developers can apply either to connect their skill to those from HP, OpenTable or Uber, or they can apply to provide services to other skills. The feature will remain in testing for now, with a public launch planned for a later, as yet unknown date.

 

Amazon introduces APL, a new design language for building Alexa skills for devices with screens

Along with the launch of the all-new Echo Show, the Alexa-powered device with a screen, Amazon also introduced a new design language for developers who want to build voice skills that include multimedia experiences.

Called the Alexa Presentation Language, or APL, it lets developers build voice-based apps that also include things like images, graphics, slideshows and video, and easily customize them for different device types – including not only the Echo Show, but other Alexa-enabled devices like the Fire TV, the Fire Tablet and the small screen of Alexa’s alarm clock, the Echo Spot.

In addition, third-party devices with screens will be able to take advantage of APL through the Alexa Smart Screen and TV Device SDK, arriving in the months ahead. Sony and Lenovo will be putting this to use first.

Voice-based skill experiences can sometimes feel limited because of their lack of a visual component. For example, a cooking skill would work better if it just showed the steps as Alexa guided users through them. Other skills could simply benefit from visual cues or other complementary information, like lists of items.

Amazon says it found that Alexa skills that use visual elements are used twice as much as voice-only skills, which is why it wanted to improve the development of these visual experiences.

The new language was built from the ground up specifically for adapting Alexa skills for different screen-based, voice-first experiences.

At launch, APL supports experiences that include text, graphics, and slideshows, with video support coming soon. Developers could do things like sync the on-screen text and images with Alexa’s spoken voice. Plus, the new skills built with this language could allow for both voice commands, as well as input through touch or remote controls, if available.

The language is also designed to be flexible in terms of the placement of the graphics or other visual elements, so companies can adhere to their brand guidelines, Amazon says. And it’s adaptable to many different types of screen-based devices, including those with different sized screens or varying memory or processing capabilities.

When introducing the new language at an event in Seattle this morning, Amazon said that APL will feel familiar to anyone who’s used to working with front-end development, as it adheres to universally understood styling practices and uses similar syntax.

Amazon is also providing sample APL documents to help developers get started, which can be used as-is or can be modified. Developers can choose to build their own from scratch, as well.

These APL documents are JSON files sent from a skill to a device. The device then evaluates the document, imports the images and other data, and renders the experience. Developers can use elements like images, text, scrollviews, pages, sequences, layouts, conditional expressions, speech synchronization and other commands. Support for video, audio and HTML5 is coming soon.
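For a sense of the shape, here’s a minimal sketch of such a document, built as a Python dict and serialized to the JSON a skill would return. The “APL” document type and the Container, Text and Image components are part of the announced language; the specific layout and content here are illustrative.

```python
import json

# Minimal sketch of an APL document. "APL", "Container", "Text" and "Image"
# are the announced document type and component names; the layout and
# content are illustrative.
apl_document = {
    "type": "APL",
    "version": "1.0",
    "mainTemplate": {
        "parameters": ["payload"],  # data source bound in at render time
        "items": [{
            "type": "Container",
            "items": [
                {"type": "Text", "text": "Step 1: Preheat the oven to 350°F."},
                {"type": "Image", "source": "https://example.com/step1.jpg"},
            ],
        }],
    },
}

# The skill sends this document to the device, which evaluates and renders it.
print(json.dumps(apl_document, indent=2))
```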

“This year alone, customers have interacted with visual skills hundreds of millions of times. You told us you want more design flexibility – in both content and layout – and the ability to optimize experiences for the growing family of Alexa devices with screens,” said Nedim Fresko, VP of Alexa Devices and Developer Technologies, in a statement. “With the Alexa Presentation Language, you can unleash your creativity and build interactive skills that adapt to the unique characteristics of Alexa Smart Screen devices,” he said.

A handful of skills have already put APL to use, including a CNBC skill, which shows a graph of stock performance; Big Sky, which shows images to accompany its weather forecasts; NextThere, which lets you view public transit schedules; Kayak, which shows slideshows of travel destinations; Food Network, which shows recipes; and several others.

Alexa device owners will be able to use these APL-powered skills starting next month. The Developer Preview for APL starts today.

Check out our full coverage from the event here.

6 million users had installed third-party Twitter clients

Twitter tried to downplay the impact that deactivating its legacy APIs would have on its community and on the third-party Twitter clients preferred by many power users, saying that “less than 1%” of Twitter developers were using these old APIs. Twitter is correct in its characterization of the size of this developer base, but it’s overlooking millions of third-party app users in the process. According to data from Sensor Tower, 6 million App Store and Google Play users installed the top five third-party Twitter clients between January 2014 and July 2018.

Over the past year, these top third-party apps were downloaded 500,000 times.

This data is largely free of reinstalls, the firm also said.

The top third-party Twitter apps users have installed since January 2014 include: Twitterrific, Echofon, TweetCaster, Tweetbot and UberSocial.

Of course, some portion of those users may have since switched to Twitter’s native app for iOS or Android, or they may run both a third-party app and Twitter’s own app in parallel.

Even if only some of these six million users remain, they represent a small, vocal and – in some cases – prominent user base. It’s one that is very upset right now, too. And for a company that just posted a loss of 1 million users in its last earnings report, it seems odd that Twitter would not figure out a way to accommodate this crowd, or even bring them onboard its new API platform to make money from them.

Twitter, apparently, weighed data and facts, not user sentiment and public perception, when it made this decision. But some things have more value than numbers on a spreadsheet. They are part of a company’s history and culture. Of course, Twitter has every right to blow all that up and move on, but that doesn’t make it the right decision.

To be fair, Twitter is not lying when it says this is a small group. The third-party user base is tiny compared with Twitter’s native app user base. During the same time that 6 million people were downloading third-party apps, the official Twitter app was installed a whopping 560 million times across iOS and Android. That puts the third-party apps’ share of installs at about 1.1% of the total.

That user base may have been shrinking over the years, too. During the past year, while the top third-party apps were installed half a million times, Twitter’s app was installed 117 million times. This made third-party apps’ share only about 0.4% of downloads, giving the official app a 99% market share.

But third-party app developers and the apps’ users are power users. Zealots, even. Evangelists.

Twitter itself credited them with pioneering “product features we all know and love” like the mute option, pull-to-refresh, and more. That means the apps’ continued existence brings more value to Twitter’s service than numbers alone can show.


They are part of Twitter’s history. You can even credit one of the apps for Twitter’s logo! Initially, Twitter only had a typeset version of its name. Then Twitterrific came along and introduced a bird for its logo. Twitter soon followed.

Twitterrific was also the first to use the word “tweet,” which is now standard Twitter lingo. (The company itself used “twitter-ing.” Can you imagine?)

These third-party apps also play a role in retaining users who struggle with the new user experience Twitter has adopted – its algorithmic timeline. The apps instead offer a chronological view of tweets, which some continue to prefer.

Twitter’s decision to cripple these developers’ apps is shameful.

It shows a lack of respect for Twitter’s history, its power user base, its culture of innovation, and its very own nature as a platform, not a destination.


Google Play now makes it easier to manage your subscriptions

Mobile app subscriptions are a big business, but consumers sometimes hesitate to sign up because pausing and cancelling existing subscriptions hasn’t been as easy as opting in. Google is now addressing those concerns with the official launch of its subscription center for Android users. The new feature centralizes all your Google Play subscriptions, and offers a way for you to find others you might like to try.

The feature was first introduced at Google’s I/O developer conference in May, and recently rolled out to Android users, the company says. However, Google hadn’t formally announced its arrival until today.

Access to the subscriptions center only takes one tap – the link is directly available from the “hamburger” menu in the Play Store app.

Apple’s page for subscription management, by comparison, is far more tucked away.

On iOS, you have to tap on your profile icon in the App Store app, then tap on your name. This already seems unintuitive – especially considering that a link to “Purchases” is on this Account screen. Why wouldn’t Subscriptions be here, too? But instead, you have to go to the next screen, then scroll down to near the bottom to find “Subscriptions” and tap that. To turn any individual subscription off, you have to go to its own page, scroll to the bottom and tap “Cancel.”

This process should be more streamlined for iOS users.

In Google Play’s Subscriptions center, you can view all your existing subscriptions, cancel them, renew them, or even restore those you had previously cancelled – perfect for turning HBO NOW back on when “Game of Thrones” returns, for example.

You can also manage and update your payment methods, and set up a backup method.

Making it just as easy for consumers to get out of their subscriptions as it is to sign up is a good business practice, and could boost subscription sign-ups overall, which benefits developers. When consumers aren’t afraid they’ll forget or not be able to find the cancellation options later on, they’re more likely to give subscriptions a try.

In addition, developers can now create deep links to their subscriptions, which they can distribute across the web, email and social media. This makes it easier to send people straight to their app’s subscription management page. When users cancel, developers can also trigger a survey to find out why – and possibly tweak their product offerings as a result of this user feedback.
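The deep link itself is just a URL into the Play Store’s subscriptions screen, keyed by the subscription’s SKU and the app’s package name. A quick sketch of the documented format (the SKU and package values below are placeholders):

```python
from urllib.parse import urlencode

# Builds a deep link to a specific subscription's management page in the
# Play Store. "premium_monthly" and "com.example.app" are placeholders.
params = {"sku": "premium_monthly", "package": "com.example.app"}
link = "https://play.google.com/store/account/subscriptions?" + urlencode(params)
print(link)
# https://play.google.com/store/account/subscriptions?sku=premium_monthly&package=com.example.app
```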

There’s also a new subscription discovery section that will help Android users find subscription-based apps through both curated and localized collections, Google notes.

These additional features, along with a good handful of subscription management tools for developers, were all previously announced at I/O but weren’t in their final state at the time. Google had cautioned that it may tweak the look-and-feel of the product between the developer event and the public launch, but it looks the same as what was shown before – right down to the demo subscription apps.

Subscriptions are rapidly becoming a top way for developers to generate revenue for their applications. Google says subscribers are growing at more than 80 percent year-over-year. Sensor Tower also reported that app revenue grew 35 percent to $60 billion in 2017, in part thanks to the growth in subscriptions.

Amazon starts shipping its $249 DeepLens AI camera for developers

Back at its re:Invent conference in November, AWS announced its $249 DeepLens, a camera that’s specifically geared toward developers who want to build and prototype vision-centric machine learning models. The company started taking pre-orders for DeepLens a few months ago, but now the camera is actually shipping to developers.

Ahead of today’s launch, I had a chance to attend a workshop in Seattle with DeepLens senior product manager Jyothi Nookula and Amazon’s VP for AI Swami Sivasubramanian to get some hands-on time with the hardware and the software services that make it tick.

DeepLens is essentially a small Ubuntu- and Intel Atom-based computer with a built-in camera that’s powerful enough to easily run and evaluate visual machine learning models. In total, DeepLens offers about 106 GFLOPS of performance.

The hardware has all of the usual I/O ports (think Micro HDMI, USB 2.0, Audio out, etc.) to let you create prototype applications, no matter whether those are simple toy apps that send you an alert when the camera detects a bear in your backyard or an industrial application that keeps an eye on a conveyor belt in your factory. The 4 megapixel camera isn’t going to win any prizes, but it’s perfectly adequate for most use cases. Unsurprisingly, DeepLens is deeply integrated with the rest of AWS’s services. Those include the AWS IoT service Greengrass, which you use to deploy models to DeepLens, for example, but also SageMaker, Amazon’s newest tool for building machine learning models.

These integrations are also what make getting started with the camera pretty easy. Indeed, if all you want to do is run one of the pre-built samples that AWS provides, it shouldn’t take you more than 10 minutes to set up your DeepLens and deploy one of these models to the camera. Those project templates include an object detection model that can distinguish between 20 objects (though it had some issues with toy dogs), a style transfer example to render the camera image in the style of van Gogh, a face detection model, a model that can distinguish between cats and dogs, and one that can recognize about 30 different actions (like playing guitar, for example). The DeepLens team is also adding a model for tracking head poses. Oh, and there’s also a hot dog detection model.

But that’s obviously just the beginning. As the DeepLens team stressed during our workshop, even developers who have never worked with machine learning can take the existing templates and easily extend them. In part, that’s due to the fact that a DeepLens project consists of two parts: the model and a Lambda function that runs instances of the model and lets you perform actions based on the model’s output. And with SageMaker, AWS now offers a tool that also makes it easy to build models without having to manage the underlying infrastructure.
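To make that structure concrete, here’s a minimal sketch of the inference half, in the style of the Python Lambda functions that run on the device via Greengrass, using the awscam module that ships on DeepLens. The model path, model type, confidence threshold and IoT topic are assumptions for illustration.

```python
# Sketch of a DeepLens inference loop (runs on the device via Greengrass).
# The model path, model type, threshold and topic name are illustrative.
import awscam          # DeepLens on-device inference module
import greengrasssdk   # for publishing results back to AWS IoT

client = greengrasssdk.client("iot-data")
model = awscam.Model("/opt/awscam/artifacts/my-model.xml", {"GPU": 1})

while True:
    ret, frame = awscam.getLastFrame()          # grab the latest camera frame
    if not ret:
        continue
    raw = model.doInference(frame)              # run the model on the frame
    detections = model.parseResult("ssd", raw)  # parse as object detections
    for obj in detections["ssd"]:
        if obj["prob"] > 0.5:                   # act on confident detections
            # "label" is a numeric class index in the sample models
            client.publish(topic="deeplens/detections",
                           payload=str(obj["label"]))
```

The Lambda function is where you’d wire up the “alert when the camera detects a bear” logic; the model itself just produces the detections.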

You could do a lot of the development on the DeepLens hardware itself, given that it is essentially a small computer, though you’re probably better off using a more powerful machine and then deploying to DeepLens using the AWS Console. If you really wanted to, you could use DeepLens as a low-powered desktop machine as it comes with Ubuntu 16.04 pre-installed.

For developers who know their way around machine learning frameworks, DeepLens makes it easy to import models from virtually all the popular tools, including Caffe, TensorFlow, MXNet and others. It’s worth noting that the AWS team also built a model optimizer for MXNet models that allows them to run more efficiently on the DeepLens device.

So why did AWS build DeepLens? “The whole rationale behind DeepLens came from a simple question that we asked ourselves: How do we put machine learning in the hands of every developer,” Sivasubramanian said. “To that end, we brainstormed a number of ideas and the most promising idea was actually that developers love to build solutions in a hands-on fashion on devices.” And why did AWS decide to build its own hardware instead of simply working with a partner? “We had a specific customer experience in mind and wanted to make sure that the end-to-end experience is really easy,” he said. “So instead of telling somebody to go download this toolkit and then go buy this toolkit from Amazon and then wire all of these together. […] So you have to do like 20 different things, which typically takes two or three days and then you have to put the entire infrastructure together. It takes too long for somebody who’s excited about learning deep learning and building something fun.”

So if you want to get started with deep learning and build some hands-on projects, DeepLens is now available on Amazon. At $249, it’s not cheap, but if you are already using AWS — and maybe even use Lambda already — it’s probably the easiest way to get started with building these kinds of machine learning-powered applications.

App developers get their wish with expanded support for free trials

A group of Apple developers recently banded together as a group called “The Developers Union” in order to plead with Apple, en masse, to allow them to offer free trials of their apps to end users. While not a traditional union with dues, it represented the first time a large group of developers pushed back at Apple’s control of the App Store’s policies. Today, it seems, the developers are having their voices heard.

In Apple’s newly updated App Store guidelines, the company has changed its policy around free trials. Previously, it allowed free trials of subscription-based apps, but now any app can offer a free trial.

The change, spotted by 9to5Mac, clarifies how this system will operate.

Apple says developers of non-subscription apps may offer a “free time-based trial period” before presenting a full unlock option by setting up a non-consumable in-app purchase that doesn’t cost any money.

The in-app purchase must specify the time frame the trial is being offered, and clearly explain to users what content and services it includes.

While Apple may have already been considering support for free trials for all apps, it’s notable that the change followed The Developer Union’s open letter on this matter. That gives the appearance, at least, that the developers had some sway. This is important because the group says they plan to advocate for other changes in the future, including a “more reasonable revenue cut” and “other community-driven, developer-friendly changes.”

As for the request for free trial support, there are currently 636 apps backing this cause on the union’s website – and the majority come from indie developers looking to grow their businesses, not the major players.

Their letter specifically asked Apple to commit to “allowing free trials for all apps for the App Stores before July 2019.”

The updated support for free trials wasn’t the only significant change in the new App Store guidelines.

Apple also added a new section that requires apps to implement “appropriate security measures” for handling user data – a rule that could allow it to boot out shadier applications. Another privacy-related change says in-app ads can’t target “sensitive user data” and must be age-appropriate.

The company addressed the situation with the rejection of the Steam Link app as well, saying that cross-platform apps may allow users to access content acquired on other platforms, but only if it’s also available as an in-app purchase.

And Apple spelled out that apps cannot mine for cryptocurrency in the background, and explained how crypto apps should operate:

(i) Wallets: Apps may facilitate virtual currency storage, provided they are offered by developers enrolled as an organization.

(ii) Mining: Apps may not mine for cryptocurrencies unless the processing is performed off device (e.g. cloud-based mining).

(iii) Exchanges: Apps may facilitate transactions or transmissions of cryptocurrency on an approved exchange, provided they are offered by the exchange itself.

(iv) Initial Coin Offerings: Apps facilitating Initial Coin Offerings (“ICOs”), cryptocurrency futures trading, and other crypto-securities or quasi-securities trading must come from established banks, securities firms, futures commission merchants (“FCM”), or other approved financial institutions and must comply with all applicable law.

(v) Cryptocurrency apps may not offer currency for completing tasks, such as downloading other apps, encouraging other users to download, posting to social networks, etc.

 

Amazon opens up in-skill purchases to all Alexa developers

Amazon today launched in-skill purchasing to all Alexa developers. That means developers have a way to generate revenue from their voice applications on Alexa-powered devices, like Amazon’s Echo speakers. For example, developers could charge for additional packs to go along with their voice-based games, or offer other premium content to expand their free voice app experience.

The feature was previously announced in November 2017, but at the time was only available to a small handful of voice app developers, like the makers of Jeopardy!, plus other game publishers.

When in-skill purchasing is added to a voice application – Amazon calls these apps Alexa’s “skills” – customers can ask to shop the purchase suggestions offered, and then pay by voice using the payment information already associated with their Amazon account.

Developers are in control of what content is offered at which price, but Amazon will handle the actual purchasing flow. It also offers self-serve tools to help developers manage their in-skill purchases and optimize their sales.
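Under the hood, that hand-off is a directive the skill returns to Alexa, which then runs the purchase flow and calls the skill back with the result. Here’s a minimal sketch of the documented shape of a purchase request, as a Python dict (the product ID and token are placeholders):

```python
# Sketch of the directive a skill returns to hand the customer off to
# Amazon's purchase flow. The productId and token values are placeholders.
buy_directive = {
    "type": "Connections.SendRequest",
    "name": "Buy",  # "Upsell" is used for suggesting, rather than buying
    "payload": {
        "InSkillProduct": {
            "productId": "amzn1.adg.product.your-product-id",
        }
    },
    "token": "correlationToken",  # returned to the skill with the result
}
```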

While any Alexa device owner can buy the available in-skill purchases, Amazon Prime members will get the best deal.

Amazon says that in-skill purchases must offer some sort of value-add for Prime subscribers, like a discounted price, exclusive content or early access. Developers are paid 70 percent of the list price for their in-skill purchase, before any Amazon discount is applied.

Already, Sony’s Jeopardy!, Teen Jeopardy! and Sports Jeopardy!; The Ellen Show’s Heads Up; Fremantle’s Match Game; HISTORY’s Ultimate HISTORY Quiz; and TuneIn Live have launched Alexa skills with premium content.

To kick off today’s launch of general availability, Amazon is announcing a handful of others who will do the same. This includes NBCU’s SYFY WIRE, which will offer three additional weekly podcasts exclusive to Alexa (Geeksplain, Debate Club, and Untold Story); Volley Inc.’s Yes Sire, which offers an expansion pack for its role-playing game; and Volley Inc.’s Word of the Day, which will soon add new vocabulary packs to purchase.

In-skill purchasing is only one of the ways Amazon lets developers generate revenue.

The company is also now offering a way for brands and merchants to sell products and services (like event tickets or flower delivery) through Alexa, using Amazon Pay for Alexa Skills. And it’s been paying top developers directly through its Developer Rewards program, which is an attempt to seed the ecosystem with skills ahead of a more robust system for skill monetization.

The news was announced alongside an update on Alexa’s skill ecosystem, which has 40,000 skills available, up from 25,000 last December.

However, the ecosystem today has a very long tail. Many of the skills are those with few or even no users, or just represent apps from those toying around with voice app development. Research on how customers are actually engaging with their voice devices has shown that generally, people are largely using them for things like news and information, smart home control, and setting timers and reminders – not necessarily things that require voice apps.

 

GitLab gets a native integration with Google’s Kubernetes Engine

GitLab, one of the most popular self-hosted Git services, has been on a bit of a roll lately. Barely two weeks after launching its integration with GitHub, the company today announced that developers on its platform can now automatically spin up a cluster on Google’s Kubernetes Engine (GKE) and deploy their applications to it with just a few clicks.

To build this feature, the company collaborated with Google, but the integration also makes extensive use of GitLab’s existing Auto DevOps tools, which already offer similar functionality for working with containers. Auto DevOps aims to take all the grunt work out of setting up CI/CD pipelines and deploying to containers.

“Before the GKE integration, GitLab users needed an in-depth understanding of Kubernetes to manage their own clusters,” said GitLab CEO Sid Sijbrandij in today’s announcement. “With this collaboration, we’ve made it simple for our users to set up a managed deployment environment on [Google Cloud Platform] and leverage GitLab’s robust Auto DevOps capabilities.”

To make use of the GKE integration, developers only have to connect to their Google accounts from GitLab. Since GKE automatically manages the cluster, developers will ideally be able to fully focus on writing their application and leave the deployment and management to GitLab and Google.

These new features, which are part of the GitLab 10.6 release, are now available to all GitLab users.

 

App Store shrank for first time in 2017 thanks to crackdowns on spam, clones and more

The App Store shrank for the first time in 2017, according to a new report from Appfigures. The report found the App Store lost 5 percent of its total apps over the course of the year, dropping from 2.2 million iOS apps at the beginning of the year to 2.1 million by year-end.

Google Play, meanwhile, grew in 2017 – it was up 30 percent to more than 3.6 million apps.

Appfigures speculated the changes had to do with a combination of factors, including stricter enforcement of Apple’s review guidelines, along with a technical change requiring app developers to update their apps to the 64-bit architecture.

Apple had also promised back in 2016 that it would clean up its iOS App Store by removing outdated, abandoned apps, including those that no longer met current guidelines or didn’t function as intended. That cleanup may well have stretched into 2017, as app store intelligence firms only started seeing the effects in late 2016. For example, there was a spike in app removals back in October 2016.

Then in 2017, Apple went after clones and spam apps on the App Store. Combined with those apps that weren’t 64-bit compatible and those that hadn’t been downloaded in years, the removals reached into the hundreds of thousands over a twelve-month period. Apple later went after template-based apps, too, before dialing back its policies over concerns it was impacting small businesses’ ability to compete on the App Store.

To see the App Store shrink, given these clear-outs, isn’t necessarily surprising. However, Appfigures found that removals of existing apps weren’t the only cause. iOS developers weren’t releasing as many apps as they had during the growth years, it also claims.

Android developers launched 17 percent more apps in 2017, reaching 1.5 million total new releases. But iOS developers launched just 755,000 new apps – a 29 percent drop and the largest drop since 2008.

But this doesn’t necessarily mean developers weren’t creating as many iOS apps – it could mean that Apple’s review team has gotten tougher about how many apps it allows in. Thanks to the spam and clone app crackdown, fewer apps of questionable quality are being approved these days.

In addition, some portion of the new Android app releases during the year were iOS apps being ported to the Google Play platform. More than twice as many iOS apps came to Android in 2017 as Android apps came to iOS, the report said.

The full report also delved into the number of cross-platform apps (450K are on both stores), the most popular non-native tools (Cordova and Unity), the rise in native development, the countries shipping the most apps (the U.S., followed by China) and the Play Store’s growth.

It can be viewed here.

 

Facebook opens Instant Games to all developers

Facebook’s Instant Games are now open to all developers, Facebook announced this week in advance of the Game Developers Conference. First launched in 2016, the platform lets developers build mobile-friendly games using HTML5 that work on both Facebook and Messenger, instead of requiring users to download native apps from Apple or Google’s app stores.

The Instant Games platform kicked off a couple of years ago with 17 games from developers like Bandai Namco, Konami, Taito, Zynga and King, who offered popular titles like Pac-Man, Space Invaders and Words with Friends. The following year, the platform grew to 50 titles and became globally available. But it wasn’t open to all – only to select partners.

In addition to getting users to spend more time on Facebook’s platform, Instant Games provides Facebook with the potential for new revenue streams now that the company is moving into game monetization.

In October, Facebook said it would begin to test interstitial and rewarded video ads, as well as in-app purchases. The tools were only available to select developers on what was then an otherwise closed platform for Facebook’s gaming partners.

Now, says Facebook, all developers can build Instant Games as the platform exits its beta testing period.

Alongside this week’s public launch, Facebook introduced a handful of new features to help developers grow, measure and monetize their games.

This includes the launch of the ads API, which was also previously in beta.

In-app purchases, however, are continuing to be tested.

Developers will also have access to Facebook’s Monetization Manager, where they can manage ads and track how well ad placements are performing, as well as a Game Switch API for cross-promoting games across the platform or creating deep links that work outside Facebook and Messenger.

Facebook says it also updated how its ranking algorithm surfaces games based on users’ recent play and interests, and updated its in-game leaderboards, among other things.

Soon, Instant Game developers will be able to build ad campaigns in order to acquire new players from Facebook. These new ad units, when clicked, will take players directly into the game where they can begin playing. 

Since last year, Facebook Instant Games have grown to nearly 200 titles, but the company isn’t talking in-depth about their performance from a revenue perspective.

It did offer one example of a well-performing title, Basketball FRVR, which is on track to make over seven figures in ad revenue annually and has been played over 4.2 billion times.

With the public launch, Facebook is offering an Instant Games developer documentation page and a list of recommended HTML5 game engines to help developers get started. Developers can then build and submit games via Facebook’s App page.