Getting tech right in Iowa and elsewhere requires insight into data, human behavior

What happened in Iowa’s Democratic caucus last week is a textbook example of how applying technological approaches to public sector work can go badly wrong just when we need it to go right.

While it’s tempting to conclude from Iowa that we shouldn’t let tech anywhere near a governmental process, that is the wrong lesson to draw, and it muddles the complexity of what happened and what didn’t. Technology won’t fix a broken policy; the key is understanding what it is good for.

What does it look like to get technology right in solving public problems? Three core principles can help build public-interest technology more effectively: solve an actual problem; design with and for users, with their lives in mind; and start small (test, improve, test).

Before developing an app or throwing a new technology into the mix in a political process, it is worth asking: what is the goal of this app, and what will it do to improve on the existing process?

Getting it right starts with understanding the humans who will use what you build to solve an actual problem. What do they actually need? In the case of Iowa, this would have meant asking seasoned local organizers what would help them during the vote count. It also means talking directly to precinct captains and caucus-goers and observing the unique process in which neighbors convince neighbors to move to a different corner of a school gymnasium when their candidate hasn’t been successful. Beyond asking about the idea of a web application, it is critical to test the application with real users under real conditions to see how it works and make improvements.

In building such a critical game-day app, you need to test it under real-world conditions, which means adoption and ease of use matter. While Shadow (the company charged with this build) did a lightweight test with some users, there wasn’t the runway to adapt or learn from those for whom the app was designed. The app may have worked fine, but that doesn’t matter if people didn’t use it or couldn’t download it.

One model of how this works can be found in the Nurse-Family Partnership, a high-impact nonprofit that helps first-time, low-income moms.

The nonprofit has built feedback loops with its moms and nurses via email and text message. It even has a full-time role “responsible for supporting the organization’s vision to scale plan by listening and learning from primary, secondary and internal customers to assess what can be done to offer an exceptional Nurse-Family Partnership experience.”

Building on its program of in-person assistance, the Nurse-Family Partnership co-designed an app (with Hopelab, a social innovation lab, in collaboration with the behavioral-science-based software company Ayogo). The Goal Mama app builds on the relationship between nurses and moms. It was developed with these clients in mind after research showed the majority of moms in the program were using their smartphones extensively, so the app would meet moms where they were. Through this approach of using technology and data to address the needs of its workforce and clients, the organization has served 309,787 moms across 633 counties and 41 states.

Another example is the work of Built for Zero, a national effort focused on the ambitious goal of ending homelessness across 80 cities and counties. Community organizers start with the personal challenges of the unhoused — they know that without understanding a person and their needs, they won’t be able to build successful interventions that get that person housed. Their work combines a methodology of human-centered organizing with smart data science to deliver constant assessment and improvement, and they collaborate with the Tableau Foundation to build and train communities to collect data to new standards and monitor progress toward a goal of zero homelessness.

Good tech always starts small, tests, learns and improves with real users. Parties, governments and nonprofits should expand on the learning methods that are common to tech startups and espoused by Eric Ries in The Lean Startup. By starting with small tests and learning quickly, public-interest technology acknowledges the high stakes of building technology to improve democracy: real people’s lives are affected. With questions about equity, justice, legitimacy and integrity on the line, starting small helps ensure enough runway to make important changes and work out the kinks.

Take for example the work of Alia. Launched by the National Domestic Workers Alliance (NDWA), it’s the first benefits portal for house cleaners. Domestic workers do not typically receive employee benefits, making things like taking a sick day or visiting a doctor impossible without losing pay.

Its easy-to-use interface enables people who hire house cleaners to contribute directly to their benefits, allowing workers to receive paid time off, accident insurance and life insurance. Alia’s engineers benefited from deep user insights gained by connecting to a network of house cleaners. In the growing gig economy, the Alia model may be instructive for a range of workers at the local, state and federal levels. Obama organizers in 2008 dramatically increased volunteer sign-ups (by up to 18%) just by A/B testing the words and colors used for the call to action on their website.
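For a sense of what such a test involves, here is a minimal sketch of a two-variant call-to-action experiment in Python. The variant copy, colors and tracking code are hypothetical illustrations, not the Obama campaign’s actual tooling:

```python
import hashlib
from collections import defaultdict

# Hypothetical call-to-action variants: button copy and color.
VARIANTS = [
    {"id": "A", "label": "Sign Up", "color": "blue"},
    {"id": "B", "label": "Learn More", "color": "red"},
]

impressions = defaultdict(int)  # times each variant was shown
conversions = defaultdict(int)  # times each variant led to a signup

def assign_variant(visitor_id: str) -> dict:
    # Hash the visitor ID so a returning visitor always sees the same variant.
    bucket = int(hashlib.sha256(visitor_id.encode()).hexdigest(), 16)
    variant = VARIANTS[bucket % len(VARIANTS)]
    impressions[variant["id"]] += 1
    return variant

def record_conversion(variant_id: str) -> None:
    conversions[variant_id] += 1

def conversion_rate(variant_id: str) -> float:
    shown = impressions[variant_id]
    return conversions[variant_id] / shown if shown else 0.0
```

Comparing conversion rates across enough visitors shows which wording and color actually move people to act.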

There are many instructive public-interest technologies that focus on designing not just for the user but with them. This includes work in civil society, such as the Center for Civic Design, which ensures people can have easy and seamless interactions with government, and The Principles for Digital Development, the first of which is “design with the user.” There is also work being done inside governments, from the Government Digital Service in the U.K. to the United States Digital Service, which was launched in the Obama administration.

Finally, it also helps to deeply understand the conditions in which technology will be used. What are the lived experiences of the people who will be using the tool? Did the designers dig in and attend a caucus to see how paper has captured the moving of bodies and changing of minds in gyms, cafes and VFW halls?

In the case of Iowa, it requires understanding the caucuses’ norms, rules and culture. A political caucus is a unique situation.

This year, moreover, the Iowa caucus introduced several process changes that increased transparency but also made the process more complex, which also needed to be taken into account when deploying a tech solution. Understanding the conditions in which technology is deployed requires a nuanced understanding of policies and behavior, and of how policy changes can affect design choices.

Building a technical solution without doing the user research to see what people really need runs the risk of reducing credibility and further eroding trust. Building the technology itself is often the simple part. The complex part is relational. It requires investing in the capacity to engage, train, test and iterate.

We are accustomed to same-day delivery and instantaneous streaming in our private and social lives, which raises our expectations for what we want from the public sector. The push to modernize and streamline is what leads to believing an app is the solution. But building the next killer app for our democracy requires more than just prototyping a splashy tool.

Public-interest technology means working toward the broader, difficult challenge of rebuilding trust in our democracy. Every time we deploy tech for the means of modernizing a process, we need to remember this end goal and make sure we’re getting it right.

Iowa’s caucus app was a disaster waiting to happen

A smartphone app designed to help announce the results of the Iowa caucus crapped out, delaying the results by almost an entire day.

The Iowa caucus traditionally uses gatherings of people in precincts across the state to determine which candidates they want to back for the presidential nomination, with a paper trail as a way of auditing the results. While Iowa awards only 41 of the 1,990 delegates needed to clinch the Democratic nomination, the results are nevertheless seen as a nationwide barometer for who might end up on the ticket.

In an effort to modernize and speed up the process, the Iowa Democrats commissioned an app.

But the app, built by a company called Shadow Inc., failed spectacularly. Some precincts had to call in their results instead.

Iowa Democrats spokesperson Mandy McClure described the app’s failure as a “reporting issue” rather than a security matter or a breach. McClure later said it was a “coding issue.” The results had been expected to land late on Monday but have now been delayed until Tuesday afternoon, according to the Iowa Democrats.

Who could have seen it coming? Actually, quite a few people.

“There was no need whatsoever for an app,” said Zeynep Tufekci, an associate professor at the University of North Carolina, in a tweet.

Little is known about the app, which has been shrouded in secrecy even after it was profiled by NPR in January. The app was the first of its kind to be used in a U.S. presidential nomination process, despite concerns that the use of electronics or apps might open up the process to hackers.

What is known is that details of its security were kept secret amid fears that they could be used by hackers to exploit the system. That’s been criticized by security experts, who say “security through obscurity” is a fallacy. Acting Homeland Security secretary Chad Wolf said on television Tuesday that the Iowa Democrats declined an offer from the agency to test the app for security flaws. And because of the secrecy, there’s no evidence to show that the app went through extensive testing — or if it did, what levels of testing and scrutiny it underwent.

Some say the writing was on the wall.

“Honestly, there is no need to attribute conspiracy or call shenanigans on what happened with the new app during the Iowa caucuses,” Dan McFall, chief executive at app testing company Mobile Labs, told me in an email. “It’s a tale that we have seen with our enterprise customers for years: A new application was pushed hard to a specific high profile deadline. Mobility is much harder than people realize, so initial release was likely delayed, and to make the deadline, they cut the process of comprehensive testing and then chaos ensues.”

Others agreed. Doron Reuveni, who heads up software testing firm Applause, said the app should have gone through extensive, real-world testing to uncover the “blind spots” that the app’s own developers may not see. And Simone Petrella, chief executive of cybersecurity firm CyberVista and a former analyst at the Department of Defense, said there was no need for a sophisticated solution to a simple problem.

“A Google Sheet or another shared document could suffice,” she said. “It is incredibly difficult — and costly — to build and deliver solutions that are designed to ensure security and still are intuitive to an end user. If you’re going to build a solution or application to solve for this type of issue, then you’re going to have to make sure it’s designed with security in mind from the start and do rigorous product testing and validation throughout the development process to ensure everything is captured and data is being directed properly and securely.”

The high-profile failure is likely to set off alarm bells in other districts and states with similar plans in place ahead of their respective caucuses before the Democratic National Convention in July, where the party will choose its candidate for president.

Nevada was said to be using the app next for its upcoming caucus in February, but that plan has been nixed.

“We will not be employing the same app or vendor used in the Iowa caucus,” a spokesperson for the Nevada Democrats said. “We had already developed a series of backups and redundant reporting systems and are currently evaluating the best path forward.”

In a tweet, Shadow Inc. expressed “regret” about the problems with the Iowa caucus, and that it “will apply the lessons learned in the future.”

Why an app was used for such an important process is a question many will be asking themselves today. On the bright side, at least Iowa now serves as a blueprint for how not to use tech in elections.

An app tasked with reporting the results of the Iowa caucus has crashed

A smartphone app tasked with reporting the results of the Iowa caucus has crashed, delaying the result of the first major count in nominating a Democratic candidate to run for the U.S. presidency.

The result of the Iowa caucus was due to be transmitted by smartphone app from precincts across the state on Monday, but a “quality control” issue was detected shortly before the result was expected.

“We found inconsistencies in the reporting of three sets of results,” said Mandy McClure, a spokesperson for the Iowa Democrats.

“In addition to the tech systems being used to tabulate results, we are also using photos of results and a paper trail to validate that all results match and ensure that we have confidence and accuracy in the numbers we report,” said McClure, adding that this was “not a hack or an intrusion.”

“The underlying data and paper trail is sound and will simply take time to further report the results,” she said.

Some reports say that the result may not be called before Tuesday.

A report by NPR in January said the smartphone app was designed to save time in reporting the results, but it bucked the trend away from electronics in the voting process at a time when voting machines and other election infrastructure are feared vulnerable to hackers. Security concerns were raised about the app, whose developer had not been named and whose security practices had not been detailed, amid fears that doing so would help hackers break into the system.

But the app was reportedly described as buggy and problematic by officials hours before the final results were due to be called.

Screenshots in tweets seen by TechCrunch, which have since been deleted, showed problems with the app as early as 6 p.m. local time.

One precinct chair in Shelby County said she would call in her results instead.

Iowa is an important first round of votes to nominate a Democratic candidate for the presidency. The final candidate will be chosen later this year to run against presumptive Republican candidate President Donald Trump.

Plenty of Fish app was leaking users’ hidden names and postal codes

Dating app Plenty of Fish has pushed out a fix for its app after a security researcher found it was leaking information that users had set to “private” on their profiles.

The app was always silently returning users’ first names and postal ZIP codes, according to The App Analyst, a mobile expert who writes about his analyses of popular apps on his eponymous blog.

The leaked data was not immediately visible to app users, and it was scrambled to make it difficult to read. But using freely available tools designed to analyze network traffic, the researcher found it was possible to reveal the information about users as their profiles appeared on his phone.
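As an illustration of the kind of check those tools enable, here is a minimal mitmproxy addon that flags API responses containing fields a user would expect to stay private. The host and field names are hypothetical stand-ins, not Plenty of Fish’s actual API:

```python
import json
from mitmproxy import http

# Hypothetical names of profile fields users expect to stay hidden.
SENSITIVE_KEYS = {"firstName", "zipCode"}

def response(flow: http.HTTPFlow) -> None:
    # Only inspect JSON responses from the (hypothetical) profile API host.
    if "api.example-dating.com" not in flow.request.pretty_host:
        return
    if "application/json" not in flow.response.headers.get("content-type", ""):
        return
    try:
        payload = json.loads(flow.response.get_text())
    except ValueError:
        return
    if not isinstance(payload, dict):
        return
    leaked = SENSITIVE_KEYS & payload.keys()
    if leaked:
        print(f"[leak] {flow.request.path} exposes: {', '.join(sorted(leaked))}")
```

Run with `mitmproxy -s leak_check.py` while the phone’s traffic is proxied through the machine, and any response carrying those fields gets flagged.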

In one case, the App Analyst found enough information to identify where a particular user lived, he told TechCrunch.

Plenty of Fish has more than 150 million registered users, according to its parent company IAC. In recent years, law enforcement has warned about the threats some people face on dating apps, like Plenty of Fish. Reports suggest sex attacks involving dating apps have risen in the past five years. And those in the LGBTQ+ community on these apps also face safety threats from both individuals and governments, prompting apps like Tinder to proactively warn their LGBTQ+ users when they visit regions and states with restrictive and oppressive laws against same-sex partners.

A fix is said to have rolled out earlier this month for the information leakage bug. A spokesperson for Plenty of Fish did not immediately comment.

Earlier this year, the App Analyst found a number of third-party tools were allowing app developers to record the device’s screen while users engaged with their apps, prompting a crackdown by Apple.

Blindlee is Chatroulette for dating with a safety screen

Make space for another dating app in your single life: Blindlee is Chatroulette for dating but with female-friendly guardrails in the form of a user-controlled video blur effect.

The idea is pretty simple: Singles are matched randomly with another user who meets some basic criteria (age, location) for a three-minute ‘ice breaker’ video call. The app suggests chat topics — like ‘pineapple on pizza, yay or nay’ — to get the conversation flowing. After this, each caller chooses whether or not to match — and if both match, they can continue to chat via text.

The twist is that the video call is the ‘first contact’ medium for determining whether it’s a match or not. The call also starts “100% blurred” — for obvious, ‘dick pic’ avoidance reasons.

Blindlee says female users control the level of blur during the call — meaning they can elect to reduce it to 75%, 50%, 25% or nothing if they like what they’re (partially) seeing and hearing. Their interlocutor also has to agree to each reduction, though, so neither side can unilaterally rip the screen away.
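A rough sketch of how that mutual-consent rule might be enforced server-side. The blur levels follow the article; the data structures and function names are assumptions about Blindlee’s internals, not its actual code:

```python
from dataclasses import dataclass
from typing import Optional

BLUR_LEVELS = (100, 75, 50, 25, 0)  # percent blur, per the article

@dataclass
class CallBlurState:
    level: int = 100               # every call starts fully blurred
    pending: Optional[int] = None  # proposed reduction awaiting consent

def request_reduction(state: CallBlurState, new_level: int) -> None:
    # One caller proposes less blur; nothing changes until the other agrees.
    if new_level not in BLUR_LEVELS or new_level >= state.level:
        raise ValueError("can only step toward less blur")
    state.pending = new_level

def respond(state: CallBlurState, accepted: bool) -> int:
    # The other caller consents or declines; neither side can unblur alone.
    if accepted and state.pending is not None:
        state.level = state.pending
    state.pending = None
    return state.level
```

Because `respond` is the only path that changes `level`, a reduction always requires both sides to act.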

Dating apps continue to be a bright spot for experimental ideas, despite category giants like Tinder dominating with a much-cloned swipe-to-match formula. Tech giant Facebook also now has its own designs on the space. But it turns out there’s no fixed formula for finding love or chemistry.

All the data in the world can’t necessarily help with that problem. So a tiny, bootstrapping startup like Blindlee could absolutely hit on something inspired that Tinder or Facebook hasn’t thought of (or else feels it can’t implement across a larger user-base).

Co-founder Sacha Nasan also reckons there’s space for supplementary dating apps.

“We’re focusing on blind dating which is a subset of dating so you can say that indirectly rather than directly we are competing with the big dating apps (Tinder etc). This is more niche and is definitely a new, untried concept to the dating world,” he argues. “However the good thing about dating apps is that they are not substitutes but complements.

“Just like people may have installed Uber on their phone but also Hailo and Lyft, people have multiple dating apps installed as well (to maximise their chances of finding a partner) and that is an advantage. Nonetheless we still think that we only indirectly compete with other dating apps.”

Using a blur effect to preserve privacy is not in itself an entirely new idea. For example, Muzmatch, a YC-backed dating app focused on matchmaking Muslims, offers a blur feature to users who don’t want to put their profile photos out there for any other user to see.

But Blindlee is targeting a more general dating demographic. Though Nasan says it does plan to expand matching filters, if/when it can grow its user-base, to include additional criteria such as religion.

“The target is anyone above 18 (for legal reasons) and from the data we see most users are under 30,” he says. “So this covers university students to young professionals. On the spectrum of dating apps where ‘left’ would be hookups apps (like Tinder used to be) and ‘right’ would be relationship app (like Hinge), we position ourself more on the right side (a relationship app).”

Blindlee is also using video as the chemistry-channeling medium to help users decide if they match or not.

This is clever because it’s still a major challenge to know if you’ll click with an Internet stranger in real life with only a digitally mediated version of the person to go on. At least live on camera there’s only so much faking that can be done — well, unless the person is a professional actor or scammer.

And while plunging into a full-bore videochat with a random might sound a bit much, a blurry teaser with conversation prompts looks fairly low risk. The target user for Blindlee is also likely to have grown up online and with smartphones and Internet video culture. A videocall should therefore be a pretty comfortable medium of expression for these singles.

“The idea came from my experience in the app world (since the age of 14) combined with a situation where my cousin… went on a date from one of the dating apps where the man who showed up was about 15 years older. The man had used old pictures on his profile,” explains Nasan. “That’s just one story and there are plenty like these so I grew tired of the sometimes fake and superficial aspect of the online dating world. Together with my cousin’s brother [co-founder, Glenn Keller] we decided to develop Blindlee to make the process more transparent and safer but also fun.

“Blindlee makes for a fun three-minute blurred video experience with a random person matching your criteria. It’s kind of like a short, pre-date ice-breaker before you potentially match and decide to meet in real life. And we put control of the blur filter in the woman’s hand to make it safer for women (but also because if the men would have control they would straight away ask to unblur it — and we have tested this!).”

The app is a free download for now but the plan is to move to a freemium model with a limit on the number of free video chats per day — charging a monthly subscription to unlock more than three daily calls.

“This will be priced cheap around £3-4/month compared to usual dating premium subscription which cost £10+ a month,” he says. “We basically look at this income as a way of paying the server bills (as every minute of video costs us).”
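A back-of-the-envelope sketch of the planned gating: three free calls a day, with a subscription lifting the cap. The function and in-memory store are placeholders, not Blindlee’s implementation:

```python
import datetime

FREE_DAILY_CALLS = 3  # per the planned freemium model

# Placeholder in-memory store: user_id -> (date, calls made that day)
call_counts: dict[str, tuple[datetime.date, int]] = {}

def may_start_call(user_id: str, is_subscriber: bool) -> bool:
    if is_subscriber:
        return True  # subscribers are uncapped
    today = datetime.date.today()
    last_day, count = call_counts.get(user_id, (today, 0))
    if last_day != today:
        count = 0  # the free-call counter resets each day
    if count >= FREE_DAILY_CALLS:
        return False
    call_counts[user_id] = (today, count + 1)
    return True
```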

The London-based startup was founded in March and launched the app in October on iOS, adding an Android version earlier this month. Nasan says they’ve picked up around 5,000 registered users so far with only minimal marketing — such as dropping flyers on London university campuses.

While they’re bootstrapping the launch he says they may look to take in angel funding “as we see growth picking up”.

Google enlists mobile security firms to help rid Google Play of bad Android apps

Google has partnered with mobile security firms ESET, Lookout and Zimperium to combat the scourge of malicious Android apps that have snuck into the Google Play app store.

The announcement came Wednesday, with each company confirming their part in the newly created App Defense Alliance. Google said it’s working with the companies to “stop bad apps before they reach users’ devices.”

The search giant has struggled to fight malicious apps in recent years. Although apps are screened for malware and other malicious components before they are allowed into Google Play, the company has been accused of not doing enough to weed out malicious apps before they make it to users’ devices.

Google said earlier this year that just 0.04% of all Android app downloads from Google Play were considered potentially harmful — which still works out to about 30 million potentially harmful downloads.

Yet, it remains an ongoing problem.

ESET, Lookout, and Zimperium have all contributed to the discovery — and eventual takedown — of hundreds of malicious apps on Google Play in recent years.

But each time Google takes down a suspicious or malicious app from Google Play, the thousands or millions of users with the app installed on their phone remain vulnerable. The apps are not removed from devices, continuing to put users at risk.

Google will integrate its Google Play Protect technology, Android’s built-in antimalware engine, with each of its partners’ scanning engines; the collective effort will help to better screen apps before they are approved for users to download.
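Conceptually, the screening works like a gate in the publication pipeline: every submission is scanned by Play Protect and each partner engine, and any engine can block or escalate it. Here is a toy sketch of that aggregation; the verdict interface is invented for illustration, since none of these vendors publish such an API:

```python
from typing import Callable

# Hypothetical engine interface: maps an APK path to a verdict string.
Engine = Callable[[str], str]  # "clean", "suspicious" or "malicious"

def stub_engine(name: str) -> Engine:
    def scan(apk_path: str) -> str:
        # Stand-in for a real scanner (Play Protect, ESET, Lookout, Zimperium).
        return "clean"
    return scan

ENGINES: dict[str, Engine] = {
    name: stub_engine(name)
    for name in ("play_protect", "eset", "lookout", "zimperium")
}

def screen_submission(apk_path: str) -> str:
    verdicts = {name: scan(apk_path) for name, scan in ENGINES.items()}
    if any(v == "malicious" for v in verdicts.values()):
        return "reject"         # any single engine can block publication
    if any(v == "suspicious" for v in verdicts.values()):
        return "manual_review"  # disagreement escalates to human review
    return "approve"
```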

Google said that knowledge sharing and industry collaboration are “important” to combat rising mobile app threats.

TD Ameritrade is bringing customers’ financial portfolios into the car

TD Ameritrade has integrated with in-vehicle software platforms Apple CarPlay, Android Auto and Amazon’s Echo Auto to give customers the ability to check their stock portfolio or get the latest financial news while sitting behind the wheel.

TD Ameritrade launched the suite of in-vehicle experiences this week, the latest move by the company to place investors just a voice command or click away from a stock price or other financial information.

TD Ameritrade Chief Information Officer Vijay Sankaran called this a “natural next step” and another way the company is “using complex technology to weave investing seamlessly into our daily lives.”

For now, customers won’t be able to make trades from the vehicle, though that might be another “natural next step,” considering TD Ameritrade’s trajectory. Customers already can trade over the phone, via a desktop computer or mobile app and, more recently, through Amazon Alexa-enabled devices.

Instead, the features will depend on which in-vehicle software platform a customer is using. For those with Apple CarPlay, customers can keep track of real-time market news with a new TDAN Radio app from the TD Ameritrade Network. The network will broadcast news live via audio streaming optimized for CarPlay.

Drivers using the Android Auto and Echo Auto platforms have the option to use voice commands to unlock market performance summaries and sector updates, hear real-time quotes and check account balances and portfolio performance.

“In a connected world like ours, we have to meet investors where they are, whether at home, in the office, or on the go,” Sunayna Tuteja, head of strategic partnerships and emerging technologies at TD Ameritrade, said in a statement. “In-vehicle technology offers a new type of connectivity that further breaks down barriers to accessing financial education and markets.”

Facebook collected device data on 187,000 users using banned snooping app

Facebook obtained personal and sensitive device data on about 187,000 users of its now-defunct Research app, which Apple banned earlier this year after the app violated its rules.

The social media giant said in a letter to Sen. Richard Blumenthal’s office — which TechCrunch obtained — that it collected data on 31,000 users in the U.S., including 4,300 teenagers. The rest of the collected data came from users in India.

Earlier this year, a TechCrunch investigation found both Facebook and Google were abusing their Apple-issued enterprise developer certificates, which are designed to let employees run iPhone and iPad apps used only inside the company. The investigation found the companies were building and providing apps for consumers outside Apple’s App Store, in violation of Apple’s rules. The apps paid users in return for collecting data on how participants used their devices, gaining access to all of the network data flowing in and out of a device to understand app habits.

Apple banned the apps by revoking Facebook’s enterprise developer certificate — and later Google’s. In doing so, the revocation knocked offline both companies’ fleets of internal iPhone and iPad apps that relied on the same certificates.

But in response to lawmakers’ questions, Apple said it didn’t know how many devices installed Facebook’s rule-violating app.

“We know that the provisioning profile for the Facebook Research app was created on April 19, 2017, but this does not necessarily correlate to the date that Facebook distributed the provisioning profile to end users,” said Timothy Powderly, Apple’s director of federal affairs, in his letter.

Facebook said the app dated back to 2016.

TechCrunch also obtained the letters sent by Apple and Google to lawmakers in early March, which were never made public.

These “research” apps relied on willing participants to download the app from outside the app store and use the Apple-issued developer certificates to install it. The apps would then install a root network certificate, allowing them to collect all of the data flowing out of the device — web browsing histories, encrypted messages, mobile app activity and potentially data from participants’ friends — for competitive analysis.
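To see why that root certificate is the crucial step, consider how TLS clients decide whom to trust: any server certificate chaining to a CA in the device’s trust store is accepted. Here is a short sketch of the principle in Python; the `research_root.pem` file is a hypothetical extra root, standing in for the certificate such an app installs:

```python
import socket
import ssl

# Start from the system trust store, then add one extra root CA —
# analogous to what the research apps did on participants' devices.
ctx = ssl.create_default_context()
ctx.load_verify_locations("research_root.pem")  # hypothetical extra root

with socket.create_connection(("example.com", 443)) as sock:
    # If a proxy intercepts this connection and presents a certificate
    # signed by that extra root, the handshake still succeeds — and the
    # proxy can read the decrypted traffic for any site, not just one.
    with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.version(), tls.getpeercert()["subject"])
```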

A response by Facebook about the number of users involved in Project Atlas (Image: TechCrunch)

In Facebook’s case, the research app — dubbed Project Atlas — was a repackaged version of its Onavo VPN app, which Facebook was forced to remove from Apple’s App Store last year for gathering too much device data.

Just this week, Facebook relaunched its research app as Study, only available on Google Play and for users who have been approved through Facebook’s research partner, Applause. Facebook said it would be more transparent about how it collects user data.

Facebook’s vice president of public policy Kevin Martin defended the company’s use of enterprise certificates, saying it “was a relatively well-known industry practice.” When asked, a Facebook spokesperson didn’t quantify this further. Later, TechCrunch found dozens of apps that used enterprise certificates to evade the app store.

Facebook previously said it “specifically ignores information shared via financial or health apps.” In its letter to lawmakers, Facebook stuck to its guns, saying its data collection was focused on “analytics,” but confirmed “in some isolated circumstances the app received some limited non-targeted content.”

“We did not review all of the data to determine whether it contained health or financial data,” said a Facebook spokesperson. “We have deleted all user-level market insights data that was collected from the Facebook Research app, which would include any health or financial data that may have existed.”

But Facebook didn’t say what kind of data, only that the app didn’t decrypt “the vast majority” of data sent by a device.

Facebook describing the type of data it collected — including “limited, non-targeted content” (Image: TechCrunch)

Google’s letter, penned by public policy vice president Karan Bhatia, did not provide a number of devices or users, saying only that its app was a “small scale” program. When reached, a Google spokesperson did not comment by our deadline.

Google also said it found “no other apps that were distributed to consumer end users,” but confirmed several other apps used by the company’s partners and contractors, which no longer rely on enterprise certificates.

Google explaining which of its apps were improperly using Apple-issued enterprise certificates (Image: TechCrunch)

Apple told TechCrunch that both Facebook and Google “are in compliance” with its rules as of the time of publication. At its annual developer conference last week, the company said it now “reserves the right to review and approve or reject any internal use application.”

Facebook’s willingness to collect this data from teenagers — despite constant scrutiny from press and regulators — demonstrates how valuable the company considers market research on its competitors. With its paid research program restarted, this time with greater transparency, the company continues to leverage its data collection to keep ahead of its rivals.

Facebook and Google came off worse in the enterprise app abuse scandal, but critics said that, in revoking enterprise certificates, Apple retains too much control over what content customers can have on their devices.

The Justice Department and the Federal Trade Commission are said to be examining the big four tech giants — Apple, Amazon, Facebook and Google-owner Alphabet — for potentially falling afoul of U.S. antitrust laws.

With antitrust investigations looming, Apple reverses course on bans of parental control apps

With congressional probes and greater scrutiny from federal regulators on the horizon, Apple has abruptly reversed course on its bans of parental control apps available in its app store.

As reported by The New York Times, Apple quietly updated its App Store guidelines to reverse its decision to ban certain parental control apps.

The battle between Apple and certain app developers dates back to last year when the iPhone maker first put companies on notice that it would cut their access to the app store if they didn’t make changes to their monitoring technologies.

The heart of the issue is the use of mobile device management (MDM) technologies in the parental control apps that Apple has removed from the App Store, Apple said in a statement earlier this year.

These device management tools give control and access over a device’s user location, app use, email accounts, camera permissions and browsing history to a third party.

“We started exploring this use of MDM by non-enterprise developers back in early 2017 and updated our guidelines based on that work in mid-2017,” the company said.

Apple acknowledged that the technology has legitimate uses in the context of businesses looking to monitor and manage corporate devices to control proprietary data and hardware, but the company said it is “a clear violation of App Store policies” for a private, consumer-focused app business to install MDM control over a customer’s device.

Last month, developers of these parental monitoring tools banded together to offer a solution. In a joint statement issued by app developers including OurPact, Screentime, Kidslox, Qustodio, Boomerang, Safe Lagoon, and FamilyOrbit, the companies said simply, “Apple should release a public API granting developers access to the same functionalities that Apple’s native “Screen Time” uses.”

By providing access to its Screen Time feature, Apple would obviate the need for the kind of controls that developers had put in place to work around Apple’s restrictions.

“The API proposal presented here outlines the functionality required to develop effective screen time management tools. It was developed by a group of leading parental control providers,” the companies said. “It allows developers to create apps that go beyond iOS Screen Time functionality, to address parental concerns about social media use, child privacy, effective content filtering across all browsers and apps and more. This encourages developer innovation and helps Apple to back up their claim that “competition makes everything better and results in the best apps for our customers”.

Now, Apple has changed its guidelines to indicate that apps using MDM “must request the mobile device management capability, and may only be offered by commercial enterprises, such as business organizations, educational institutions, or government agencies, and, in limited cases, companies utilizing MDM for parental controls. MDM apps may not sell, use, or disclose to third parties any data for any purpose, and must commit to this in their privacy policy.”

Essentially, the change reverses the company’s policy without granting access to Screen Time, as the consortium of companies had suggested.

“It’s been a hellish roller coaster,” Dustin Dailey, a senior product manager at OurPact, told The New York Times. OurPact had been the top parental control app in the App Store before it was pulled in February. The company estimated that Apple’s move cost it around $3 million, a spokeswoman told the Times.


Apple defends its takedown of some apps monitoring screen-time

Apple is defending its removal of certain parental control apps from the app store in a new statement.

The company has come under fire for its removal of certain apps that were pitched as tools giving parents more control over their children’s screen-time, but that Apple said relied on technology that was too invasive for private use.

“We recently removed several parental control apps from the App Store, and we did it for a simple reason: they put users’ privacy and security at risk. It’s important to understand why and how this happened,” the company said in a statement.

The heart of the issue is the use of mobile device management technologies in the parental control apps that Apple has removed from the app store, the company said.

These device management tools give control and access over a device’s user location, app use, email accounts, camera permissions and browsing history to a third party.

“We started exploring this use of MDM by non-enterprise developers back in early 2017 and updated our guidelines based on that work in mid-2017,” the company said.

Apple acknowledged that the technology has legitimate uses in the context of businesses looking to monitor and manage corporate devices to control proprietary data and hardware, but the company said it is “a clear violation of App Store policies” for a private, consumer-focused app business to install MDM control over a customer’s device.

The company said it communicated to app developers that they were in violation of App Store guidelines and gave them 30 days to submit updates to avoid being booted from the App Store.

Indeed, we first reported that Apple was warning developers about screen-time apps in December.

“Several developers released updates to bring their apps in line with these policies,” Apple said in a statement. “Those that didn’t were removed from the App Store.”