Spotify stays quiet about launch of its voice command ‘Hey Spotify’ on mobile

In 2019, Spotify began testing a hardware device for automobile owners it lovingly dubbed “Car Thing,” which allowed Spotify Premium users to play music and podcasts using voice commands that began with “Hey, Spotify.” Last year, Spotify began developing a similar voice integration into its mobile app. Now, access to the “Hey Spotify” voice feature is rolling out more broadly.

Spotify chose not to officially announce the new addition, despite numerous reports indicating the voice option was showing up for many people in their Spotify app, leading to some user confusion about availability.

One early report by GSM Arena, for example, indicated Android users had been sent a push notification that alerted them to the feature. The notification advised users to “Just enable your mic and say ‘Hey Spotify, Play my Favorite Songs.’” When tapped, the notification launched Spotify’s new voice interface, where users are first prompted to give the app permission to use the microphone so they can verbally request the music they want to hear.

Several outlets soon reported the feature had launched to Android users, which is only partially true.

As it turns out, the feature is making its way to iOS devices, as well. When we launched the Spotify app here on an iPhone running iOS 14.5, for instance, we found the same feature had indeed gone live. You just tap on the microphone button by the search box to get to the voice experience. We asked around and found that other iPhone users on various versions of the iOS operating system also had the feature, including free users, Premium subscribers and Premium Family Plan subscribers.

The screen that appears suggests in big, bold text that you could be saying “Hey Spotify, play…” followed by a random artist’s name. It also presents a big green button at the bottom to turn on “Hey Spotify.”

Once enabled, you can ask for artists, albums, songs and playlists by name, as well as control playback with commands like stop, pause, skip this song, go back and others. Spotify confirms the command with a robotic-sounding male voice by default. (You can swap to a female voice in Settings, if you prefer.)

Image Credits: Spotify screenshot on iOS

This screen also alerts users that when the app hears the “Hey Spotify” voice command, it sends the user’s voice data and other information to Spotify. There’s a link to Spotify’s policy regarding its use of voice data, which further explains that Spotify will collect recordings and transcripts of what you say, along with information about the content it returned to you. The company says it may continue to use this data to improve the feature, develop new voice features and target users with relevant advertising. It may also share your information with service providers, like cloud storage providers.

The policy looks to be the same as the one that was used along with Spotify’s voice-enabled ads, launched last year, so it doesn’t seem to have been updated to fully reflect the changes enabled with the launch of “Hey Spotify.” However, it does indicate that, like other voice assistants, Spotify doesn’t just continuously record — it waits until users say the wake words.

Given the “Hey Spotify” voice command’s origins with “Car Thing,” there’s been speculation that the mobile rollout is a signal that the company is poised to launch its own hardware to the wider public in the near future. There’s already some indication that may be true — MacRumors recently reported finding references to and photos of Car Thing and its various mounts inside the Spotify app’s code. This follows Car Thing’s reveal in FCC filings back in January of this year, which had also stoked rumors that the device was soon to launch.

Spotify was asked for comment yesterday morning, but despite a day’s wait has yet to provide any answers about the feature’s launch. Instead, we were told that they “unfortunately do not have any additional news to share at this time.” That further suggests some larger project could be tied to this otherwise minor feature’s launch.

Though today’s consumers are wary of tech companies’ data collection methods — and particularly their use of voice data, after Amazon, Google and Apple all confessed to poor practices on this front — there’s still a use case for voice commands, particularly from an accessibility standpoint and, for drivers, from a safety standpoint.

And although you can direct your voice assistant on your phone (or via CarPlay or Android Auto, if available) to play content from Spotify, some may find it useful to be able to speak to Spotify directly — especially since Apple doesn’t allow Spotify to be set as a default music service. You can only train Siri to launch Spotify as your preferred service.

If, however, you have second thoughts about using the “Hey Spotify” feature after enabling it, you can turn it off under “Voice Interactions” in the app’s settings.

Sendbird raises $100M at a $1B+ valuation, says 150M+ users now interact using its chat and video APIs

Messaging is the medium these days, and today a startup that has built an API to help others build text and video interactivity into their services is announcing a big round to continue scaling its business. Sendbird, a popular provider of chat, video and other interactive services to the likes of Reddit, Hinge, Paytm, Delivery Hero and hundreds of others by way of a few lines of code, has closed a round of $100 million, money that it plans to use to continue expanding the functionalities of its platform to meet our changing interactive times. Sendbird has confirmed that the funding values the company at $1.05 billion.
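To make that “few lines of code” pitch concrete, here is a rough Python sketch of sending a chat message through Sendbird’s server-side Platform API. The v3 endpoint shape, Api-Token header and message fields reflect Sendbird’s REST documentation as we understand it; the app ID, token, user and channel values are placeholders, not real credentials.

```python
import requests

# Placeholder credentials: a real integration uses its own app ID and API token.
APP_ID = "YOUR_SENDBIRD_APP_ID"
API_TOKEN = "YOUR_MASTER_API_TOKEN"
BASE_URL = f"https://api-{APP_ID}.sendbird.com/v3"

def send_channel_message(channel_url: str, user_id: str, text: str) -> dict:
    """Post a plain-text message to a group channel via the Platform API."""
    resp = requests.post(
        f"{BASE_URL}/group_channels/{channel_url}/messages",
        headers={"Api-Token": API_TOKEN, "Content-Type": "application/json"},
        json={"message_type": "MESG", "user_id": user_id, "message": text},
    )
    resp.raise_for_status()
    return resp.json()

# Hypothetical channel and user, for illustration only.
print(send_channel_message("general_chat", "user_123", "Hello from the server!"))
```

In practice, most customers embed Sendbird’s client SDKs (JavaScript, iOS, Android) in their own apps; the server API above is the same machinery exposed over REST.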

Today, customers collectively channel some 150 million users through Sendbird’s APIs to chat with each other and with large groups over text and video, a figure that has grown substantially in the past year in particular, as people spent much more time in front of screens as their primary interface for communicating with the world.

Sendbird already provides some services around that core functionality, such as moderation and text search. John Kim, Sendbird’s CEO and founder, said that add-ons like moderation have seen huge take-up. Services it plans to add to the mix include payments and logistics features, and it is looking at adding group audio conversations so customers can build their own Clubhouse clones.

“We are getting enquiries,” said Kim. “We will be setting it up in a personalized way. Voice chat has certainly picked up due to Clubhouse.”

The funding — oversubscribed, the company says — is being led by Steadfast Financial, with SoftBank’s Vision Fund 2 also participating, along with previous backers ICONIQ Capital, Tiger Global Management and Meritech Capital. It comes about two years after Sendbird closed its Series B at $102 million, and the startup appears to have nearly doubled its valuation since then: PitchBook estimates it was around $550 million in 2019.

That growth, in a sense, is not a surprise, given not just the climate right now for virtual interaction, but the fact that Sendbird itself has tripled the number of customers using its tools since 2019. The company, co-headquartered in Seoul, Korea and San Mateo, has now raised around $221 million.

The market that Sendbird has been pecking away at since being founded in 2013 is a hefty one.

Messaging apps have become a major digital force, with a small handful of companies overtaking (and taking on) the primary communication features found on the most basic of phones, and finding traction by making those features easier to use and layering more interesting ones alongside the basics. That in turn has led a wave of other companies to build in their own communications features, both to provide more community for their users and to keep people on their own platforms in the process.

“It’s an arms race going on between messaging and payment apps,” Sid Suri, Sendbird’s marketing head, said to me in describing the competitive landscape. “There is a high degree of urgency among all businesses to say we don’t have to lose users to any of them. White label services like ours are powering the ability to keep up.”

Sendbird is indeed one of a wave of companies that have identified both that trend and the opportunity of building that functionality out as a commodity of sorts that can be embedded anywhere a developer chooses to place it by way of an API. It’s not the only one: others in the same space include publicly listed Twilio, the similarly named competitor MessageBird (which is also highly capitalized and has positioned itself as a consolidator in the space), PubNub, Sinch, Stream, Firebase and many more.

That competition is one reason why Sendbird has raised money. The capital lets it bring on more users and, critically, invest in building out more functionality alongside its core features: addressing the needs of existing users, and discovering features they perhaps didn’t know they needed in their messaging channels to keep their users’ attention.

“We are doing a lot around transactions and payments, as well as logistics,” Kim said in an interview. “We are really building out the end to end experience [since that] really ties into engagement. A couple of new features will be heavily around transactions, and others will be around more engagement.”

Karan Mehandru, a partner at Steadfast, is joining the board with this round, and he believes that there remains a huge opportunity especially when you consider the many verticals that have yet to adopt solid and useful communications channels within their services, such as healthcare.

“The channel that Sendbird is leveraging is the next channel we have come to expect from all brands,” he said in an interview. “Sendbird may look the same as others but if you peel the onion, providing a scalable chat experience that is highly customized is a real problem to solve. Large customers think this is critical but not a core competence and then zoom back to Sendbird because they can’t do it. Sendbird is a clear leader. Sendbird is permeating many verticals and types of companies now. This is one of those rare companies that has been at the right place at the right time.”

Qualcomm’s new chipset for wireless earbuds promises improved noise cancellation and all-day battery life

There are now so many wireless earbuds, it’s hard to keep track, but one of the reasons why we’ve seen this explosion in new and existing manufacturers entering this business is the availability of Bluetooth Audio SoCs from Qualcomm, including the QCC5100 and QCC30xx series. Today, the company is launching the latest chipset in its wireless portfolio, the QCC305x.

Unsurprisingly, it’s a more powerful chip, with four more-powerful cores compared to the three cores of its QCC304x predecessor. But the real promise here is that this additional processing power will now enable earbud makers to offer features like adaptive active noise cancellation and support for using wake words to activate Alexa or the Google Assistant.

The new chipset now also supports Qualcomm’s aptX Adaptive with an audio resolution of up to 96kHz and aptX Voice for 3-microphone echo canceling and noise suppression for clearer calls while you are on the go (or on a Zoom call, which is more likely these days). And despite the increased power, Qualcomm promises all-day battery life, too, though, at the end of the day, it’s up to the individual manufacturer to tune their gadgets accordingly.

Image Credits: Qualcomm

The new chipset has also been designed to support the upcoming Bluetooth LE Audio standard. This new standard hasn’t been finalized just yet, but it promises features like multi-stream for multiple synchronized audio streams from a single device — useful for wireless earbuds — and support for personal audio sharing, so that you can share music from your smartphone with other people around you. There’s also location-based sharing to allow public venues like airports and gyms to share Bluetooth audio with their visitors.

It’s still early days for Bluetooth LE Audio, but during a press conference ahead of today’s announcement, Qualcomm continuously stressed that its new chips will be ready for it once the standard is ratified.

“Not only do our QCC305x SoCs bring many of our latest-and-greatest audio features to our mid-range truly wireless earbud portfolio, they are also designed to be developer-ready for the upcoming Bluetooth LE Audio standard,” James Chapman, vice president and general manager of Voice, Music and Wearables at Qualcomm, said in the announcement. “We believe this combination gives our customers great flexibility to innovate at a range of price points and helps them meet the needs of today’s audio consumers, many of whom now rely on their truly wireless earbuds for all sorts of entertainment and productivity activities.”

Image Credits: Qualcomm

Google Assistant can now control Android apps

Google today announced it’s making it possible to use the voice command “Hey Google” to not just open but also perform specific tasks within Android apps. The feature will be rolled out to all Google Assistant-enabled Android phones, allowing users to launch apps with their voice as well as search inside apps or perform specific tasks — like ordering food, playing music, posting to social media, hailing a ride and more.

For example, users could say something like, “Hey Google, search cozy blankets on Etsy,” “open Selena Gomez on Snapchat,” “start my run with Nike Run Club,” or “check news on Twitter.”

At launch, Google says these sorts of voice commands will work with more than 30 of the top apps on Google Play in English globally, with more apps coming soon. Some of the supported apps today include Spotify, Snapchat, Twitter, Walmart, Discord, Etsy, MyFitnessPal, Mint, Nike Adapt, Nike Run Club, eBay, Kroger, Postmates and Wayfair, to name a few.

If the specific voice command you would use to perform a common task is a little clumsy, the feature also lets you create a custom shortcut phrase instead. That means instead of saying “Hey Google, tighten my shoes with Nike Adapt,” you could create a command that’s simply “Hey Google, lace it.”

To get started with shortcuts, Android users can say “Hey Google, show my shortcuts” to get to the correct Settings screen.

The feature is similar to Apple’s support for using Siri with iOS apps, which also includes the ability to open apps, perform tasks and record your own custom phrase.

In Google’s case, the ability to perform tasks inside an app is implemented on the developer’s side by mapping users’ intents to specific functionality inside their apps. This feature, known as App Actions, allows users to open their favorite apps with a voice command — and, with the added functionality, lets users say “Hey Google” to search within the app or open specific app pages.
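Under the hood, App Actions aren’t written as server code: developers declare which of Google’s built-in intents their Android app can fulfill, in the app’s shortcuts.xml, and point each one at functionality the app already exposes, such as a deep link. The toy Python sketch below is purely conceptual and only illustrates that intent-to-deep-link mapping; the intent names come from Google’s built-in intent catalog, while the deep-link scheme and parameter names are hypothetical.

```python
# Conceptual sketch only: real App Actions are declared in an Android app's
# shortcuts.xml against Google's built-in intents, not implemented in Python.
# The core idea: a parsed voice intent plus its parameters resolves to a
# deep link the app already handles.

BUILT_IN_INTENTS = {
    # Built-in intent name -> deep-link template (scheme is hypothetical)
    "actions.intent.GET_THING": "exampleapp://search?query={thing}",
    "actions.intent.START_EXERCISE": "exampleapp://workout/start?type={exercise}",
}

def resolve_intent(intent_name: str, params: dict) -> str:
    """Map a recognized intent and its slots onto an in-app deep link."""
    template = BUILT_IN_INTENTS[intent_name]
    return template.format(**params)

# "Hey Google, search cozy blankets on Example App"
print(resolve_intent("actions.intent.GET_THING", {"thing": "cozy blankets"}))
```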

Google says it has grown its catalog to include more than 60 intents across 10 verticals, including Finance, Ridesharing, Food Ordering, Fitness and, now, Social, Games, Travel & Local, Productivity, Shopping and Communications, too.

To help users understand how and when they can use these new App Actions, Google says it’s building touchpoints in Android that will teach them which voice commands they can use. For instance, if a user says “Hey Google, show me Taylor Swift,” Assistant may highlight a suggestion chip that guides the user to opening the search result on Twitter.

Image Credits: Google

Related to this news, Google says it also released two new English voices for developers to leverage when building custom experiences for Assistant on Smart Displays, alongside other developer tools and resources for those building for displays.

The Google Assistant upgrade for apps was one of several Android improvements Google highlighted today. The company also says it’s adding screen-sharing to Google Duo, expanding its Verified Calls anti-spam feature to more devices (Android 9 and up), and updating the Google Play Movies & TV app to become the new “Google TV” app, announced last week.

On the accessibility front, it’s introducing new tools for people with hearing loss via Sound Notifications, and others for communicating using Action Blocks, aimed at people with cerebral palsy, Down syndrome, autism, aphasia and other speech-related disabilities.

The features are available now.

Walmart launches its own voice assistant, ‘Ask Sam,’ initially for employee use

Walmart is expanding its use of voice technology. The company announced today it’s taking its employee assistance voice technology, dubbed “Ask Sam,” and making it available to associates at over 5,000 Walmart stores nationwide. The tool allows Walmart employees to look up prices, access store maps, find products, view sales information, check email, and more. In recent months, Ask Sam has also been used to access COVID-19 information, including the latest guidelines, guidance and safety videos.

“Ask Sam” was initially developed for use in Walmart-owned “Sam’s Club” stores, where it rolled out across the U.S. in 2019. Because it uses voice tech, Ask Sam can cut the time it takes to get to information versus typing a query on a small screen. That lets employees better engage with customers instead of spending time looking for information on their device.

In the COVID-19 era, the tool offers another perk — it’s easier to use a voice app when you’re wearing gloves.

In addition to common functions like price lookups and product locators, Ask Sam can also help employees with printing, email, or viewing staff birthdays or other events. An included Emergency Alert feature allows managers to quickly and efficiently alert all employees to emergency situations, whether that’s a lockdown order requiring them to remain in the store or an in-store emergency that requires everyone to leave the store.

The voice assistance technology was built using machine learning techniques, which means it gets smarter and more accurate over time as it’s used. In addition, a team manually reviews the questions being asked to help surface patterns and trends the tech may have missed, like top-searched items.

This is not the retailer’s first experiment with voice technology. In addition to the Ask Sam product’s earlier launch within Sam’s Club stores, Walmart itself also partnered with Google last year on voice ordering across Google Assistant-powered platforms, in a bid to counter Amazon’s advances with Alexa in the home. And three years ago, Walmart had worked with Google on voice-based shopping via Google Home devices, before Google Express shut down.

Walmart has not said whether it would create a version of the Ask Sam technology aimed at serving retail customers. But given that the product is now capable of answering questions customers also want answered — like where to find an item or how much it costs — it makes sense that the retailer would expand the offering in the future.

 

Where is voice tech going?

2020 has been anything but normal. For businesses and brands. For innovation. For people.

The trajectories of business growth strategies, travel plans and lives have been drastically altered by the COVID-19 pandemic, a global economic downturn with supply chain and market issues, and the fight for equality in the Black Lives Matter movement — all on top of what already complicated lives and businesses.

One of the biggest stories in emerging technology is the growth of different types of voice assistants:

  • Niche assistants such as Aider that provide back-office support.
  • Branded in-house assistants such as those offered by the BBC and Snapchat.
  • White-label solutions such as Houndify that provide lots of capabilities and configurable tool sets.

With so many assistants proliferating globally, voice will become a commodity like a website or an app. And that’s not a bad thing — at least in the name of progress. It will soon (read: over the next couple of years) become table stakes for a business to have voice as an interaction channel, part of the lovable experience that users expect. Consider that feeling you get when you realize a business doesn’t have a website: It makes you question its validity and reputation for quality. Voice isn’t quite there yet, but it’s moving in that direction.

Voice assistant adoption and usage are still on the rise

Adoption of any new technology is key. A major inhibitor of new technology is often distribution, but this has not been the case with voice. Apple, Google and Baidu have reported hundreds of millions of devices using voice, and Amazon has 200 million users. Amazon has a slightly more difficult job since it isn’t in the smartphone market, which gives Apple and Google greater built-in distribution for their voice assistants.

Image Credits: Mark Persaud

But are people actually using the devices? Google said recently there are 500 million monthly active users of Google Assistant. Not far behind is Apple, with 375 million active users. Large numbers of people are using voice assistants, not just owning them. That’s a sign of a technology gaining momentum — it is at a price point and within digital and personal ecosystems that make it ripe for user adoption. The pandemic has only amplified that use: Edison reported a rise in usage between March and April — a peak time for sheltering in place across the U.S.

Amazon revamps its Alexa app to focus on first-party features, more personalization

After launching a number of new developer tools for Alexa last week, Amazon today is introducing an updated version of its Alexa mobile app for consumers. The new app aims to offer a more personalized experience, particularly on users’ home screens, and offers more instructions on how and when consumers can use the digital assistant, among other changes. Notably, the app has moved its third-party skill suggestions off the main screen, to increase focus on how consumers are actually using Alexa.

The redesign offers an updated home screen, with a big Alexa button now at the top informing users they can either tap or say “Alexa” to get started.

This is followed by a list of personalized suggestions based on what consumers’ usage of the app indicates is important to them — whether that’s reminders, a recently played item like their music or an Audible book, access to their shopping list, and so on.

Image Credits: Amazon

Users may also see controls for features that are frequently accessed or currently active, like the volume level for their Echo devices, so they can pick up where they left off, Amazon says. It’s worth noting that these Echo devices could include Echo Buds, Amazon’s Alexa-powered wireless earbuds, which could be key to its plans of enabling Alexa’s newly announced capabilities for controlling mobile apps.

For first-time users, the Alexa app will offer more tips on what to do on mobile. For instance, new users may see suggestions about playing songs with Amazon Music or prompts to manage their Alexa Shopping List.

Meanwhile, the app’s advanced features — like Reminders, Routines, Skills and Settings — have been relocated under the “More” button as part of the redesign.

The changes don’t necessarily mean Amazon has decluttered the Alexa home screen, however.

Because the update moved the Alexa button to the top of the screen, it has left room in the navigation bar for a new button: “Play,” which encourages media playback.

The revamp also suggests that Alexa’s dedicated app hasn’t exactly found its sweet spot to become part of users’ daily lives.

Before, the app had featured the date and weather at the top of the screen — an indication that Amazon had hoped the app would be something of a daily dashboard. (See below.) Now, the company seems to understand that users will launch the app when they want to do something Alexa-specific. That’s why it’s making it easier to get to recent actions, so users can effectively pick up where they left off on whatever they were doing on their Echo smart speaker, for example.

Image Credits: Current Alexa app, screenshot via TechCrunch

In addition, the new app notably deprioritizes Alexa’s third-party voice apps (aka “skills”), which have not yet evolved into an app ecosystem to rival its mobile counterparts, like the Apple App Store for iOS apps or Google Play. Studies have indicated a large number of Alexa skills weren’t being used, and as a result, the pace of new skills releases has slowed.

Instead of showcasing popular skills on the home screen, as before, the app’s “Skills & Games” section has been shuffled off to the “More” tab. Amazon’s first-party experiences, like shopping, media playback and communications, now take up this crucial home screen real estate.

Amazon says the new app is rolling out worldwide over the month ahead on iOS, Android, and Fire OS devices. By late August, all users should be migrated to the new experience.

Amazon adds ‘hands-free’ Alexa to its Alexa mobile app

Amazon is making it easier for mobile users to access its Alexa virtual assistant while on the go. The company announced today it’s making it possible to use Alexa “hands free” from within its Alexa mobile app for iOS and Android, meaning customers will be able to use Alexa to make lists, play music, control their smart home devices and more, without having to touch their phone.

Customers can first command their phone’s digital assistant, like Siri or Google Assistant, to launch the Alexa app to get started with the hands-free experience. They can then speak to Alexa as they would normally, saying something like “Alexa, set the thermostat to 72,” “Alexa, remind me to call Jen at 12 pm tomorrow,” “Alexa, what’s the weather?” and so on. Customers can even request to stream music directly within the Alexa app itself, if they choose.

Before, users would have to tap the blue Alexa button at the bottom of the screen before Alexa would listen.

Once the wake word is detected, an animated blue line will appear at the bottom of the app’s screen to indicate Alexa is streaming the request to the cloud.

Amazon had previously integrated the Alexa experience into its other apps, including its flagship shopping app and Amazon Music. In the latter, it rolled out a hands-free Alexa option back in 2018, allowing users to control playback or ask for music without having to tap. But the Alexa app has remained a tap-to-talk experience until now, which doesn’t quite mesh with how Alexa works on most other devices, like Amazon Echo speakers and screens, for example.

After updating the app, customers will be presented with the option to enable the hands-free detection and can then begin to use the feature. A setting is also being made available that will allow users to turn the feature off at any time.

Amazon notes the feature will only work when the phone is unlocked and the Alexa app is open on the screen. It won’t be able to launch Alexa from a locked phone or when the app is closed and off the screen, running in the background. (As we don’t have the app update yet ourselves, we are unable to directly confirm this detail.)

To use the new feature, customers will have to first update their Alexa app to the latest version on the Apple App Store or Google Play store.

Amazon says the feature is rolling out over the next several days to users worldwide, so you may not see the option immediately.

Pandora launches interactive voice ads

Pandora has begun to test a new type of advertising format that allows listeners to respond to the ad by speaking aloud. In the new ads, listeners are prompted to say “yes” after the ad asks a question and a tone plays. The ads will then offer more information about the product or brand in question.

Debut advertisers testing the new format include Doritos, Ashley HomeStores, Unilever, Wendy’s, Turner Broadcasting, Comcast, and Nestle.

The ads begin by explaining what they are and how they’ll work. They then play a short and simple message followed by a question that listeners are supposed to respond to.

For example, the Wendy’s ad asks listeners if they’re hungry, and if they say “yes” the ad continues by offering a recommendation about what to eat. The DiGiorno pizza ad asks listeners to say “yes” to hear the punchline of a pizza-themed joke. The Ashley HomeStores ad engages listeners by offering tips on getting a better night’s sleep. And so on.

The new format capitalizes on Pandora’s underlying voice technology, which also powers the app’s smart voice assistant, Voice Mode, launched earlier this year. While Voice Mode lets Pandora users control their music hands-free, the voice ads aim to get users to engage with the advertiser’s content hands-free, as opposed to tapping on the screen or visiting a link to get more information.

The company believes these types of ads will be more meaningful as they force listeners to pay attention. For the brand advertisers, voice ads offer a way to more directly measure how many people an ad reached — something that’s not possible with traditional audio ads, which by their nature aren’t clickable.

Pandora announced its plans to test interactive voice ads back in April of this year, initially with San Francisco-based adtech company Instreamatic. At the time, it said it would launch the new format into beta testing by Q4, as it now has.

The ad format arrives at a time when consumers have become more comfortable talking to digital voice assistants, like Siri, Alexa, and Google Assistant. There’s also an increased expectation that services we interact with will support voice commands — like when we’re speaking to Fire TV or Apple TV to find something to watch or asking Pandora or Spotify to play our favorite music.

But consumers’ appetite for interactive voice advertisements is still largely untested. Even Amazon limited voice ads on its Alexa platform for fear of alienating users who would find them disruptive to the core experience.

In Pandora’s case, however, users don’t have to play along. The company says if the user doesn’t respond within a couple of seconds or if they say no, the music resumes playback.

Pandora says the ads will begin running for a small subset of listeners using its app starting today.

 

Alexa developers can now personalize their skills by recognizing the user’s voice

Amazon Alexa is already capable of identifying different voices to give personalized responses to different users in the household, thanks to the added support for voice profiles two years ago. Now, those same personalization capabilities will be offered to Alexa Skill developers, Amazon has announced.

Alongside Amazon’s big rollout of new consumer devices on Wednesday, the company also introduced a new “skill personalization” feature for the Alexa Skills Kit that lets developers tap into the voice profiles customers create through the Alexa companion app or from their device.

This expanded capability lets developers make skills that are able to remember a user’s custom settings, address their preferences when using the skill, and just generally recognize the different household members who are speaking at the time, among other things.

To make this work, Alexa sends a directed identifier — a generated string of characters and numbers — to the skill in question, if the customer has a voice profile set up. Every time the customer returns to that skill, the same identifier is shared. This identifier doesn’t include any personally identifiable information, Amazon says, and is different for each voice profile for each skill the customer uses.

Skill developers can then leverage this information to generate personalized greetings or responses based on the customers’ likes, dislikes, and interests.
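As a rough sketch of what this looks like in code, here’s how a skill built with the ASK SDK for Python might branch on that identifier. The person object on the request’s system context is part of Alexa’s personalization support; the handler wiring and greeting copy are our own illustration, not Amazon sample code.

```python
# Minimal sketch using the ASK SDK for Python (ask-sdk-core).
# context.system.person is only present when Alexa recognized a voice profile.
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_request_type

class LaunchHandler(AbstractRequestHandler):
    def can_handle(self, handler_input):
        return is_request_type("LaunchRequest")(handler_input)

    def handle(self, handler_input):
        person = handler_input.request_envelope.context.system.person
        if person:
            # Recognized speaker: person.person_id is the directed identifier,
            # usable as a key for saved progress or preferences.
            speech = "Welcome back! Want to pick up where you left off?"
        else:
            # No voice profile match, so fall back to a generic greeting.
            speech = "Welcome! To personalize this skill, set up a voice profile."
        return handler_input.response_builder.speak(speech).response

sb = SkillBuilder()
sb.add_request_handler(LaunchHandler())
handler = sb.lambda_handler()
```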

If the customer doesn’t want to use skill personalization even though they configured a voice profile, they can opt out of the feature in the Alexa app.

Personalization could be a particular advantage to Alexa skills like games, where users may want to save their progress, or to music or podcasts/audio programming skills, where taste preferences come into play.

However, Alexa’s process for establishing voice profiles still requires manual input on the user’s part — people have to configure the option in the Alexa companion app’s settings, or say to Alexa, “learn my voice.” Many consumers may not know it’s even an option — which means developers interested in the feature may have to educate users by way of informational tips in their own apps, at first.

The feature is launching in preview, which means Amazon is opening it up only to select developers for now. Those interested in putting the option to use will have to apply for access and wait to hear back.