MacBook Pro 16” first impressions: Return of the Mack

In poker, complacency is a quiet killer. It can steal your forward momentum bit by bit, using the warm glow of a winning hand or two to cover the bets you’re not making until it’s too late and you’re out of leverage. 

Over the past few years, Apple’s MacBook game had begun to suffer from a similar malaise. Most of the company’s product lines were booming, including newer entries like the Apple Watch, AirPods and iPad Pro. But as problems with its MacBook models started to mount — unreliable keyboards, low RAM ceilings and anemic graphics options — the once insurmountable advantage the MacBook held over the rest of the notebook industry began to dwindle. 

The new 16” MacBook Pro Apple is announcing today is an attempt to rectify most, if not all, of the major complaints of its most loyal, and vocal, users. It’s a machine that offers a massive amount of upsides for what appears to be a handful of easily justifiable tradeoffs. It’s got better graphics, a bigger display for nearly no extra overall size, a bigger battery with longer life claims and yeah, a completely new keyboard.

I’ve only had a day to use the machine so far, but I did all of my research and writing for this first-look piece on it, carting it around New York City, through the airport and onto the plane where I’m publishing this now. This isn’t a review, but I can take you through some of the new stuff and give you thoughts based on that chunk of time. 

This is a rethink of the larger MacBook Pro in many significant ways. It is a brand new model that completely replaces the 15” MacBook Pro in Apple’s lineup, not an additional model. 

Importantly, the team working on this new MacBook started with no design constraints on weight, noise, size or battery. This is not a thinner machine, it is not a smaller machine, it is not a quieter machine. It is, however, better than the current MacBook Pro in all of the ways that actually count.

Let’s run down some of the most important new things. 

Performance and thermals

The 16” MacBook Pro comes configured with either a 2.6GHz 6-core i7 or a 2.3GHz 8-core i9 from Intel. These are the same processors the 15” MacBook Pro shipped with; the lack of advancement here is largely a function of Intel’s chip readiness. 

The i7 model of the 16” MacBook Pro will run $2,399 for the base model — the same as the old 15” — and it comes with a 512GB SSD and 16GB of RAM. 

Both models can be ordered today and will be in stores at the end of the week.

The standard graphics configuration in the i7 is an AMD Radeon Pro 5300M with 4GB of memory alongside an integrated Intel UHD Graphics 630 chip. The system continues to use the dynamic handoff system that trades power for battery life on the fly.


The i9 model will run $2,699 and comes with a 1TB drive. That’s a nice bump in storage for both models, into very comfortable territory for most people. It rolls with an AMD Radeon Pro 5500M with 4GB of memory.

You can configure both models with an AMD Radeon Pro 5500M with 8GB of GDDR6 memory. Both models can also now get up to 8TB of SSD storage – which Apple says is the most ever on a notebook – and 64GB of 2666MHz DDR4 RAM, but I’d expect those upgrades to be pricey.

The new power supply delivers an additional 12W of power, and there is a new thermal system to compensate for that. The heat pipe has been redesigned, and there are more fan blades on 35% larger fans that move 28% more air compared to the 15” model. 

The fans in the MacBook Pro, when active, put out the same decibel level of sound, but push way more air than before. So, not a reduction in sound, but not an increase either — and the trade is better cooling. Another area where the design process for this MacBook focused on performance gains rather than the obvious sticker copy. 

There’s also a new power brick, which is the same physical size as the 15” MacBook Pro’s adapter but now supplies 96W, up from 87W. The brick is still as chunky as ever and feels a tad heavier, but it’s nice to get some additional power out of it. 

Though I haven’t been able to put the MacBook Pro through any video editing or rendering tests, I was able to see live demos of it handling several 8K streams concurrently. With the beefiest internal config, Apple says it can usually handle as many as four, perhaps five, unrendered ProRes streams.

A bigger display, a thicker body

The new MacBook Pro has a larger 16” diagonal Retina display with a 3072×1920 resolution at 226 ppi. The panel features the same 500-nit maximum brightness, P3 color gamut and True Tone tech as the current 15”. The bezels of the screen are narrower, which makes it feel even larger when you’re sitting in front of it. Those slimmer bezels are also why the overall size of the new MacBook Pro is just 2% larger in width and height, with a 0.7mm increase in thickness. 
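If you want to see how that 226 ppi figure falls out of the resolution, the arithmetic is quick; the sketch below assumes the 16.0-inch diagonal of Apple's marketing name, since the article only lists the resolution and the pixel density.

```python
# Minimal sanity check of the display math above. The 16.0-inch diagonal is an
# assumption based on the product name; the article gives only the resolution
# and the 226 ppi figure.
import math

width_px, height_px = 3072, 1920
diagonal_inches = 16.0

diagonal_px = math.hypot(width_px, height_px)  # pixels along the diagonal
ppi = diagonal_px / diagonal_inches
print(round(ppi))  # -> 226, matching the stated pixel density
```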

The overall increase in screen size far outstrips the increase in overall body size because of those thinner bezels. And this model is still around the same thickness as the 2015 15” MacBook Pro, an extremely popular model among the kinds of people who are the target market for this machine. It also weighs 4.3 lbs, heavier than the 4.02 lb current 15” model.

The display looks great, extremely crisp due to the increase in pixels and even more in your face because of the very thin bezels. This thing feels like it’s all screen in a way that matches the iPad Pro.

This thick boi also features a bigger battery, a full 100Wh, the most allowable under current FAA limits. Apple says this contributes an extra hour of normal operation in its testing regimen compared to the current 15” MacBook Pro. I have not been able to effectively test these claims in the time I’ve had with it so far. 

But it is encouraging that Apple has proven willing to make the iPhone 11 Pro and the new MacBook a bit thicker in order to deliver better performance and battery life. Most of these devices are pretty much thin enough. Performance, please.

Speakers and microphone

One other area where the 16” MacBook Pro has made a huge improvement is the speaker and microphone arrays. I’m not sure I ever honestly expected to give a crap about sound coming out of a laptop. “Good enough until I put in a pair of headphones” accurately describes my expectations for laptop sound over the years. Imagine my surprise when I first heard the sound coming out of this new MacBook and it was, no crap, incredibly good. 

The new array consists of six speakers arranged so that the subwoofers are positioned in pairs, antipodal to one another (back to back). This has the effect of cancelling out a lot of the vibration that normally contributes to that rattle-prone vibrato that has characterized small laptop speakers pretty much forever.

The speaker setup they have here has crisper highs and deeper bass than you’ve likely ever heard from a portable machine. Movies are really lovely to watch with the built-ins, a sentence I have never once felt comfortable writing about a laptop. 

Apple also vents the speakers through their own chambers, rather than letting sound float out through the keyboard holes. This keeps the sound nice and crisp, with a soundstage that’s wide enough to give the impression of a center channel for voice. One byproduct of this though is that blocking one or another speaker with your hand is definitely more noticeable than before.

The quality of sound here is really very, very good. The HomePod team’s work on sound fields apparently keeps paying dividends. 

That’s not the only audio bit that’s better now, though. Apple has also put in a three-mic array for sound recording that it claims has a high enough signal-to-noise ratio to rival standalone microphones. I did some testing here comparing it to the iPhone’s mic and it’s absolutely night and day. There is remarkably little hiss present, and artists who use the MacBook as a sketch pad for vocals and other recording are going to get a really nice little surprise here.

I haven’t been able to test it against external mics myself, but I was able to listen to rigs that involved a Blue Yeti and other laptop microphones, and the MacBook’s new mic array was clearly better than any of the other machines and held its own against the Yeti. 

The directional nature of many podcast mics is going to keep them well ahead of the internal mic on the MacBook for the most part, but for truly mobile recording setups the MacBook mic just went from completely not an option to a very viable fallback in one swoop. It really has to be listened to in order to get it. 

I doubt anyone is going to buy a MacBook Pro for the internal mic, but having a ‘pro level’ device finally come with a pro level mic on board is super choice. 

I think that’s most of it, though I feel like I’m forgetting something…

Oh right, the Keyboard

Ah yes. I don’t really need to belabor the point that the MacBook Pro keyboards just haven’t been up to snuff for some time. Whether you weren’t a fan of the short throw on the new butterfly keyboards or you found yourself one of the many people (yours truly included) who ran up against jammed or unresponsive keys on that design — you know that there has been a problem.

The keyboard situation has been written about extensively by Casey Johnston and Joanna Stern and complained about by every writer on Twitter over the past several years. Apple has offered a succession of updates to that keyboard to attempt to make it more reliable and has extended warranty replacements to appease customers. 

But the only real solution was to ditch the design completely and start over. And that’s what this is: a completely new keyboard.

Apple is calling it the Magic Keyboard in homage to the iMac’s Magic Keyboard (though it is not identically designed). The new keyboard uses a scissor mechanism, not a butterfly one. It has 1mm of key travel (more, a lot more, than the butterfly design) and an Apple-designed rubber dome under each key that delivers resistance and springback for a satisfying key action. The new keycaps lock into the scissor mechanism at the top of travel to make them more stable when at rest, correcting the MacBook Air-era wobble. 

And yes, the keycaps can be removed individually to gain access to the mechanism underneath. And yes, there is an inverted-T arrangement for the arrow keys. And yes, there is a dedicated escape key.

Apple did extensive physiological research when building out this new keyboard. One test measured the effect of a keypress on a human finger. Specifically, they measured the effect of a key on the Pacinian corpuscles at the tips of your fingers. These are onion-esque structures in your skin that house nerve endings, and they are most sensitive to mechanical and vibratory pressure. 

Apple then created this specialized plastic dome that sends a specific vibration to this receptor making your finger send a signal to your brain that says ‘hey you pressed that key.’ This led to a design that gives off the correct vibration wavelength to return a satisfying ‘stroke completed’ message to the brain.

There is also more space between the keys, allowing for more definitive strokes. This is because the keycaps themselves are slightly smaller. The spacing does take some adjustment, but by this point in the article I am already getting pretty proficient and am having more grief from the autocorrect feature of Catalina than anything else. 

Notably, this keyboard is not included in the warranty extension program that Apple is applying to its older keyboard designs. There is a standard one-year warranty on this model. A statement by the company that it believes in the durability of this new design? Perhaps. It has to get out there and get bashed on by more violent keyboard jockeys than I for a while before we can tell whether it’s truly more resilient. 

But does this all come together to make a more usable keyboard? In short, yes. The best way to describe it in my opinion is a blend between the easy cushion of the old MacBook Air and the low profile stability of the Magic Keyboard for iMac. It’s truly one of the best feeling keyboards they’ve made in years and perhaps ever in the modern era. I reserve the right to be nostalgic about deep throw mechanical keyboards in this regard, but this is the next best thing. 

Pro, or Pro

In my brief and admittedly limited testing so far, the 16” MacBook Pro ends up looking like it really delivers on the Pro premise of this kind of machine in ways that have been lacking for a while in Apple’s laptop lineup. The increased storage caps, bigger screen, bigger battery and redesigned keyboard should make this an insta-buy for anyone upgrading from a 2015 MacBook Pro and a very tempting upgrade for even people on newer models that have just never been happy with the typing experience. 

Many of Apple’s devices bearing the Pro label lately have fallen into the bucket of ‘the best’ rather than ‘for professionals’. This isn’t strictly a new phenomenon for Apple, but more consumer-centric devices like the AirPods Pro and the iPhone Pro carry the label now more than ever before. 

But the 16” MacBook Pro is going to alleviate a lot of the pressure Apple has been under to provide an unabashedly Pro product for Pro Pros. It’s a real return to form for the real Mack Daddy of the laptop category. As long as this new keyboard design proves resilient and repairable I think this is going to kick off a solid new era for Apple portables.

Sahara Reporters founder Sowore remains detained in Nigeria

The founder of African investigative digital media site Sahara Reporters, Omoyele Sowore, remains detained in Nigeria on charges including treason, his wife, Opeyemi Sowore, told TechCrunch.

Her husband founded Sahara Reporters to create and aggregate news content, social media tips, and self-digital reporting toward exposing corruption in Africa and his home country of Nigeria.

After being jailed and beaten several times for his journalistic work in Nigeria, Sowore relocated to New York City and formed Sahara Reporters in Manhattan in 2006 to report under U.S. legal protections.

Several outlets, including Reuters, reported his arrest in August 2019. According to Opeyemi Sowore — who lives in New Jersey — her husband was detained in Lagos on August 4th while at a protest. He was then transferred to Nigeria’s capital, Abuja.

Per social media and press reporting, Omoyele Sowore (who goes by Sowore) was participating in the #RevolutionNow movement of peaceful demonstrations against bad governance in Nigeria. 

After several hearings, he is still being held in Abuja, his wife said.

According to a copy of his court charging document obtained by TechCrunch, Sowore is charged with two counts of conspiring to stage a revolution and to remove Nigeria’s president, Muhammadu Buhari, from office “otherwise than by constitutional means.”

Sowore is also charged with cybercrimes for “knowingly send[ing] messages by means of a press interview granted on Arise Television…for the purpose of causing insult…and ill-will on the…President of the Federal Republic of Nigeria” and for money laundering based on a transfer of $19,975 from a Nigerian bank account to a Sahara Reporters held account in New York.

Sowore pleaded not guilty to the charges and rejected an offer of bail for roughly $800,000, according to press reports and his wife.

As for the veracity of the charges, Sowore’s wife Opeyemi believes they are a cover to go after her husband for his activism and work with Sahara Reporters.

Sowore has never been an advocate of violence or insurrection, according to his wife. 

“If you look at his history he is the most peaceful person. He does what he does so Nigeria can work for all Nigerians…be inclusive of all ethnic groups, all socio-economic backgrounds, and religions,” Opeyemi Sowore said.

“I think the charges are about silencing a critical voice that’s shining light on corruption,” she added.

Not everyone is a fan of Sowore and Sahara Reporters’ work, particularly in Nigeria. The country has made strides in improving infrastructure and governance and has one of Africa’s strongest economies and tech scenes.

But Nigeria is still plagued by corruption, particularly around its oil resources, and has a steady stream of multi-billion dollar scandals (yes, billions) in state-related funds being stolen or simply going missing.

Sahara Reporters has made a practice of reporting on such corruption. The site, which has a tips line and small TV station, has exposed improprieties of many public officials and forced a number of resignations in Nigeria’s government.

In the previous administration of President Goodluck Jonathan, Sahara Reporters played a role in exposing the theft of an estimated $20 billion in public funds by Petroleum Minister Diezani Allison-Madueke, who was forced to resign and was eventually arrested.

The internet, mobile, and digital media play a central role in the work of Sahara Reporters. In an interview in 2014, Sowore explained to me how these mediums often do much of the investigative work.

“In many cases, there’s less investigation to breaking these stories than you’d think. The corruption and who’s perpetrating it is generally well-known and the evidence easy to distribute through social media and devices. We just need a safe place to report it from, and the rest often takes care of itself,” Sowore said.

Ironically, Sowore’s own thesis of using digital and social media for advocacy may now be tested in the effort to get him out of jail.

Sowore’s wife is working on a campaign of global supporters — including Amnesty International — to shine a light on her husband’s charges and innocence, and to press for his release.

Away from the activism and politics, “I want Yele to come home safely. I’m worried about his safety and we have two small children and they miss their father dearly,” Opeyemi Sowore said.

The trial for her husband Omoyele Sowore is scheduled for early November.

Another US visa holder was denied entry over someone else’s messages

It has been one week since U.S. border officials denied entry to a 17-year-old Harvard freshman just days before classes were set to begin.

Ismail Ajjawi, a Palestinian student living in Lebanon, had his student visa canceled and was put on a flight home shortly after arriving at Boston Logan International Airport. Customs & Border Protection officers searched his phone and decided he was ineligible for entry because of his friends’ social media posts. Ajjawi told the officers he “should not be held responsible” for others’ posts, but it was not enough for him to clear the border.

The news prompted outcry and fury. But TechCrunch has learned it was not an isolated case.

Since our story broke, we came across another case of a U.S. visa holder who was denied entry to the country on the grounds that he had been sent a graphic WhatsApp message. Dakhil — whose name we have changed to protect his identity — was detained for hours and subsequently had his visa canceled. He was sent back to Pakistan and banned from entering the U.S. for five years.

Since 2015, the number of device searches has increased four-fold to over 30,200 each year. Lawmakers have accused the CBP of conducting itself unlawfully by searching devices without a warrant, but CBP says it does not need to obtain a warrant for device searches at the border. Several courts have tried to tackle the question of whether or not device searches are constitutional.

Abed Ayoub, legal and policy director at the American-Arab Anti-Discrimination Committee, told TechCrunch that device searches and subsequent denials of entry had become the “new normal.”

This is Dakhil’s story.

* * *

As a Pakistani national, Dakhil needed a visa to enter the U.S. He obtained a B1/B2 visa, which allowed him to temporarily enter the U.S. for work and to visit family. Months later, he arrived at George Bush Intercontinental Airport in Houston, Texas, tired but excited to see his cousin for the first time in years.

It didn’t take long before Dakhil realized something wasn’t right.

Dakhil, who had never traveled to the U.S. before, was waiting in the immigration line at the border when a CBP officer approached him to ask why he had traveled to the U.S. He said it was for a vacation to visit his family. The officer took his passport and, after a brief examination of its stamps, asked why Dakhil had visited Saudi Arabia. It was for Hajj and Umrah, he said. As a Muslim, he is obliged to make the pilgrimages to Mecca at least once in his lifetime. The officer handed back his passport and Dakhil continued to wait in line.

At his turn, Dakhil approached the CBP officer in his booth, who repeated many of the same questions. But, unsatisfied with his responses, the officer took Dakhil to a small room close to but separate from the main immigration hall.

“He asked me everything,” Dakhil told TechCrunch. The officer asked about his work, his travel history and how long he planned to stay in the U.S. He told the officer he planned to stay for three months with a plan to travel to Disney World in Florida and later New York City with his wife and newborn daughter, who were still waiting for visas.

The officer then rummaged through Dakhil’s carry-on luggage, pulling out his computer and other items. Then the officer took Dakhil’s phone, which he was told to unlock, and took it to another room.

For more than six hours, Dakhil was forced to sit in a bright, cold and windowless airport waiting room. There was nowhere to lie down. Others had pushed chairs together to try to sleep.

A U.S. immigration form detailing Dakhil’s deportation.

Dakhil said when the officer returned, the questioning continued. The officer demanded to know more about what he was planning to do in the U.S. One line of questioning focused on an officer’s accusation that Dakhil was planning to work at a gas station owned by his cousin — which Dakhil denied.

“I told him I had no intention to work,” he told TechCrunch. The officer continued with his line of questioning, he said, but he continued to deny that he wanted to stay or work in the U.S. “I’m quite happy back in Karachi and doing good financially,” he said.

Two more officers entered the room and began to interrogate him as the first officer continued to search his bags. At one point the officer pulled out a gift for Dakhil’s cousin — a painting with Arabic inscriptions.

But Dakhil was convinced he would be allowed entry — the officers had found nothing derogatory, he said.

“Then the officer who took my phone showed me an image,” he told TechCrunch. It was an image from 2009 of a child, who had been murdered and mutilated. Despite the graphic nature of the image, TechCrunch confirmed the photo was widely distributed on the internet and easily searchable using the name of the child’s murderer.

“I was shocked. What should I say?” he told TechCrunch, describing the panic he felt. “This image is disturbing, but you can’t control the forwarded messages,” he explained.

Dakhil told the officer that the image was sent to him in a WhatsApp group. It’s difficult to distinguish where a saved image came from on WhatsApp, because it automatically downloads received images and videos to a user’s phone. Questionable content — even from unsolicited messages — found during a border search could be enough to deny the traveler entry.

The image was used to warn parents about kidnappings and abductions of children in his native Karachi. He described it as one of those viral messages that you forward to your friends and family to warn parents about the dangers to their children. The officer pressed for details about who sent the message. Dakhil told the officer that the sender was someone he met on his Hajj pilgrimage in 2011.

“We hardly knew each other,” he said, saying they stayed in touch through WhatsApp but barely spoke.

Dakhil told the officer that the image could be easily found on the internet, but the officer was more interested in the names of the WhatsApp group members.

“You can search the image over the internet,” Dakhil told the officer. But the officer declined and said the images were his responsibility. “We found this on your cellphone,” the officer said. At one point the officer demanded to know if Dakhil was involved in organ smuggling.

After 15 hours of answering questions and waiting, the officers decided that Dakhil would be denied entry and would have his five-year visa cancelled. He was told his family would also have their visas cancelled. The officers asked Dakhil if he wanted to claim asylum, which he declined.

“I was treated like a criminal,” Dakhil said. “They made my life miserable.”

* * *

It’s been almost nine months since Dakhil was turned away at the U.S. border.

He went back to the U.S. Embassy in Karachi twice to try to seek answers, but embassy officials said they could not reverse a CBP decision to deny a traveler entry to the United States. Frustrated but determined to know more, Dakhil asked for his records through a Freedom of Information Act (FOIA) request — which anyone can do — but had to pay hundreds of dollars for its processing.

He provided TechCrunch with the documents he obtained. One record said that Dakhil was singled out because his name matched a “rule hit,” such as a name on a watchlist or a visit to a country under sanctions or embargoes, which typically requires additional vetting before the traveler can be allowed into the U.S.

The record did not say what flagged Dakhil for additional screening, and his travel history did not include an embargoed country.

CBP’s reason for denying entry to Dakhil obtained through a FOIA request.

One document said CBP denied Dakhil entry to the U.S. “due to the derogatory images found on his cellphone,” and his alleged “intent to engage in unauthorized employment during his entry.” But Dakhil told TechCrunch that he vehemently denies the CBP’s allegations that he was traveling to the U.S. to work.

He said the document portrays a different version of events than what he experienced.

“They totally changed this scenario,” he said, rebutting several remarks and descriptions reported by the officers. “They only disclosed what they wanted to disclose,” he said. “They want to justify their decision, so they mentioned working in a gas station by themselves,” he claimed.

The document also said Dakhil “was permitted to view the WhatsApp group message thread on his phone and he stated that it was sent to him in September 2018,” but this was not enough to satisfy the CBP officers who ruled he should be denied entry. The document said Dakhil stated that he “never took this photo and doesn’t believe [the sender is] involved either,” but he was “advised that he was responsible for all the contents on his phone to include all media and he stated that he understood.”

The same document confirmed the contents of his phone were uploaded to the CBP’s central database and provided to the FBI’s Joint Terrorism Task Force.

Dakhil was “found inadmissible” and was put on the next flight back to Karachi, more than a day after he was first approached by the CBP officer in the immigration line.

A spokesperson for Customs & Border Protection declined to comment on individual cases, but provided a boilerplate statement.

“CBP is responsible for ensuring the safety and admissibility of the goods and people entering the United States. Applicants must demonstrate they are admissible into the U.S. by overcoming all grounds of inadmissibility including health-related grounds, criminality, security reasons, public charge, labor certification, illegal entrants and immigration violations, documentation requirements, and miscellaneous grounds,” the spokesperson said. “This individual was deemed inadmissible to the United States based on information discovered during the CBP inspection.”

CBP said it also has the right to cancel visas if a traveler is deemed inadmissible to the United States.

It’s unlikely Dakhil will return to the U.S., but he said he had hope for the Harvard student who suffered a similar fate.

“Let’s hope he can fight and make it,” he said.

The risks of amoral A.I.

Artificial intelligence is now being used to make decisions about lives, livelihoods, and interactions in the real world in ways that pose real risks to people.

We were all skeptics once. Not that long ago, conventional wisdom held that machine intelligence showed great promise, but it was always just a few years away. Today there is absolute faith that the future has arrived.

It’s not that surprising with cars that (sometimes and under certain conditions) drive themselves and software that beats humans at games like chess and Go. You can’t blame people for being impressed.

But board games, even complicated ones, are a far cry from the messiness and uncertainty of real life, and autonomous cars still aren’t actually sharing the road with us (at least not without some catastrophic failures).

AI is being used in a surprising number of applications, making judgments about job performance, hiring, loans, and criminal justice among many others. Most people are not aware of the potential risks in these judgments. They should be. There is a general feeling that technology is inherently neutral — even among many of those developing AI solutions. But AI developers make decisions and choose tradeoffs that affect outcomes. Developers are embedding ethical choices within the technology but without thinking about their decisions in those terms.

These tradeoffs are usually technical and subtle, and the downstream implications are not always obvious at the point the decisions are made.

The fatal Uber accident in Tempe, Arizona, is a not-so-subtle but illustrative example that makes it easy to see how this happens.

The autonomous vehicle system actually detected the pedestrian in time to stop but the developers had tweaked the emergency braking system in favor of not braking too much, balancing a tradeoff between jerky driving and safety. The Uber developers opted for the more commercially viable choice. Eventually autonomous driving technology will improve to a point that allows for both safety and smooth driving, but will we put autonomous cars on the road before that happens? Profit interests are pushing hard to get them on the road immediately.

Physical risks pose an obvious danger, but there has been real harm from automated decision-making systems as well. AI does, in fact, have the potential to benefit the world. Ideally, we mitigate for the downsides in order to get the benefits with minimal harm.

A significant risk is that we advance the use of AI technology at the cost of reducing individual human rights. We’re already seeing that happen. One important example is that the right to appeal judicial decisions is weakened when AI tools are involved. In many other cases, individuals don’t even know that a choice not to hire, promote, or extend a loan to them was informed by a statistical algorithm. 

Buyer Beware

Buyers of the technology are at a disadvantage when they know so much less about it than the sellers do. For the most part decision makers are not equipped to evaluate intelligent systems. In economic terms, there is an information asymmetry that puts AI developers in a more powerful position over those who might use it. (Side note: the subjects of AI decisions generally have no power at all.) The nature of AI is that you simply trust (or not) the decisions it makes. You can’t ask technology why it decided something or if it considered other alternatives or suggest hypotheticals to explore variations on the question you asked. Given the current trust in technology, vendors’ promises about a cheaper and faster way to get the job done can be very enticing.

So far, we as a society have not had a way to assess the value of algorithms against the costs they impose on society. There has been very little public discussion even when government entities decide to adopt new AI solutions. Worse than that, information about the data used for training the system plus its weighting schemes, model selection, and other choices vendors make while developing the software are deemed trade secrets and therefore not available for discussion.

The Yale Journal of Law and Technology published a paper by Robert Brauneis and Ellen P. Goodman where they describe their efforts to test the transparency around government adoption of data analytics tools for predictive algorithms. They filed forty-two open records requests to various public agencies about their use of decision-making support tools.

Their “specific goal was to assess whether open records processes would enable citizens to discover what policy judgments these algorithms embody and to evaluate their utility and fairness”. Nearly all of the agencies involved were either unwilling or unable to provide information that could lead to an understanding of how the algorithms worked to decide citizens’ fates. Government record-keeping was one of the biggest problems, but companies’ aggressive trade secret and confidentiality claims were also a significant factor.

Using data-driven risk assessment tools can be useful, especially in identifying low-risk individuals who can benefit from reduced prison sentences. Reduced or waived sentences alleviate stresses on the prison system and benefit the individuals, their families, and communities as well. Despite the possible upsides, if these tools interfere with Constitutional rights to due process, they are not worth the risk.

All of us have the right to question the accuracy and relevance of information used in judicial proceedings and in many other situations as well. Unfortunately for the citizens of Wisconsin, the argument that a company’s profit interest outweighs a defendant’s right to due process was affirmed by that state’s supreme court in 2016.

Fairness is in the Eye of the Beholder

Of course, human judgment is biased too. Indeed, professional cultures have had to evolve to address it. Judges for example, strive to separate their prejudices from their judgments, and there are processes to challenge the fairness of judicial decisions.

In the United States, the 1968 Fair Housing Act was passed to ensure that real-estate professionals conduct their business without discriminating against clients. Technology companies do not have such a culture. Recent news has shown just the opposite. For individual AI developers, the focus is on getting the algorithms correct with high accuracy for whatever definition of accuracy they assume in their modeling.

I recently listened to a podcast where the conversation wondered whether talk about bias in AI wasn’t holding machines to a different standard than humans—seeming to suggest that machines were being put at a disadvantage in some imagined competition with humans.

As true technology believers, the host and guest eventually concluded that once AI researchers have solved the machine bias problem, we’ll have a new, even better standard for humans to live up to, and at that point the machines can teach humans how to avoid bias. The implication is that there is an objective answer out there, and while we humans have struggled to find it, the machines can show us the way. The truth is that in many cases there are contradictory notions about what it means to be fair.

A handful of research papers have come out in the past couple of years that tackle the question of fairness from a statistical and mathematical point-of-view. One of the papers, for example, formalizes some basic criteria to determine if a decision is fair.

In their formalization, in most situations, differing ideas about what it means to be fair are not just different but actually incompatible. A single objective solution that can be called fair simply doesn’t exist, making it impossible for statistically trained machines to answer these questions. Considered in this light, a conversation about machines giving human beings lessons in fairness sounds more like theater of the absurd than a purported thoughtful conversation about the issues involved.
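To make the incompatibility concrete, here is a small, purely illustrative sketch. The papers are not named in this piece, so the two criteria below, demographic parity and equal false positive rates, are simply commonly cited examples: when two groups have different base rates, a classifier generally cannot satisfy both at once.

```python
# Toy illustration of two common statistical fairness criteria disagreeing.
# The data is made up for illustration; 1 = flagged as "high risk" / actually
# reoffended, 0 = otherwise.

def positive_rate(preds):
    """Share of people flagged, regardless of outcome (demographic parity)."""
    return sum(preds) / len(preds)

def false_positive_rate(preds, labels):
    """Share of people who did NOT reoffend but were flagged anyway."""
    negatives = [p for p, y in zip(preds, labels) if y == 0]
    return sum(negatives) / len(negatives)

group_a_preds, group_a_labels = [1, 1, 0, 0, 1, 0, 0, 0], [1, 1, 0, 0, 0, 0, 0, 0]
group_b_preds, group_b_labels = [1, 1, 1, 0, 1, 0, 0, 0], [1, 1, 1, 0, 1, 0, 0, 0]

# Demographic parity says both groups should be flagged at the same rate...
print(positive_rate(group_a_preds), positive_rate(group_b_preds))  # 0.375 vs 0.5

# ...while error-rate balance says innocent people should be flagged equally often.
print(false_positive_rate(group_a_preds, group_a_labels),          # ~0.17
      false_positive_rate(group_b_preds, group_b_labels))          # 0.0

# With different base rates (2/8 vs 4/8 actual positives), pushing the flagging
# rates together moves the false positive rates further apart, and vice versa.
```

Which of those criteria "should" win is exactly the kind of unsettled, political question the following paragraphs describe.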

When there are questions of bias, a discussion is necessary. What it means to be fair in contexts like criminal sentencing, granting loans, job and college opportunities, for example, have not been settled and unfortunately contain political elements. We’re being asked to join in an illusion that artificial intelligence can somehow de-politicize these issues. The fact is, the technology embodies a particular stance, but we don’t know what it is.

Technologists with their heads down focused on algorithms are determining important structural issues and making policy choices. This removes the collective conversation and cuts off input from other points-of-view. Sociologists, historians, political scientists, and above all stakeholders within the community would have a lot to contribute to the debate. Applying AI for these tricky problems paints a veneer of science that tries to dole out apolitical solutions to difficult questions. 

Who Will Watch the (AI) Watchers?

One major driver of the current trend to adopt AI solutions is that the negative externalities from the use of AI are not borne by the companies developing it. Typically, we address this situation with government regulation. Industrial pollution, for example, is restricted because it creates a future cost to society. We also use regulation to protect individuals in situations where they may come to harm.

Both of these potential negative consequences exist in our current uses of AI. For self-driving cars, there are already regulatory bodies involved, so we can expect a public dialog about when and in what ways AI driven vehicles can be used. What about the other uses of AI? Currently, except for some action by New York City, there is exactly zero regulation around the use of AI. The most basic assurances of algorithmic accountability are not guaranteed for either users of technology or the subjects of automated decision making.

Unfortunately, we can’t leave it to companies to police themselves. Facebook’s slogan, “Move fast and break things” has been retired, but the mindset and the culture persist throughout Silicon Valley. An attitude of doing what you think is best and apologizing later continues to dominate.

This has apparently been effective when building systems to upsell consumers or connect riders with drivers. It becomes completely unacceptable when you make decisions affecting people’s lives. Even if well-intentioned, the researchers and developers writing the code don’t have the training or, at the risk of offending some wonderful colleagues, the inclination to think about these issues.

I’ve seen firsthand too many researchers who demonstrate a surprising nonchalance about the human impact. I recently attended an innovation conference just outside of Silicon Valley. One of the presentations included a doctored video of a very famous person delivering a speech that never actually took place. The manipulation of the video was completely imperceptible.

When the researcher was asked about the implications of deceptive technology, she was dismissive of the question. Her answer was essentially, “I make the technology and then leave those questions to the social scientists to work out.” This is just one of the worst examples I’ve seen from many researchers who don’t have these issues on their radars. I suppose that requiring computer scientists to double major in moral philosophy isn’t practical, but the lack of concern is striking.

Recently we learned that Amazon abandoned an in-house technology that they had been testing to select the best resumes from among their applicants. Amazon discovered that the system they created developed a preference for male candidates, in effect, penalizing women who applied. In this case, Amazon was sufficiently motivated to ensure their own technology was working as effectively as possible, but will other companies be as vigilant?

As a matter of fact, Reuters reports that other companies are blithely moving ahead with AI for hiring. A third-party vendor selling such technology actually has no incentive to test that it’s not biased unless customers demand it, and as I mentioned, decision makers are mostly not in a position to have that conversation. Again, human bias plays a part in hiring too. But companies can and should deal with that.

With machine learning, they can’t be sure what discriminatory features the system might learn. Absent market forces, unless companies are compelled to be transparent about the development and use of opaque technology in domains where fairness matters, it’s not going to happen.

Accountability and transparency are paramount to safely using AI in real-world applications. Regulations could require access to basic information about the technology. Since no solution is completely accurate, the regulation should allow adopters to understand the effects of errors. Are errors relatively minor or major? Uber’s use of AI killed a pedestrian. How bad is the worst-case scenario in other applications? How are algorithms trained? What data was used for training and how was it assessed to determine its fitness for the intended purpose? Does it truly represent the people under consideration? Does it contain biases? Only by having access to this kind of information can stakeholders make informed decisions about appropriate risks and tradeoffs.

At this point, we might have to face the fact that our current uses of AI are getting ahead of its capabilities and that using it safely requires a lot more thought than it’s getting now.

Tastemakers raises $1.4M to sell Africa experiences to the world

New York-based startup Tastemakers has raised a $1.4 million seed round—led by Precursor Ventures—for its business that connects Africa adventures to global consumers.

Tastemakers’ platform curates, prices, and lists African travel and cultural experiences—from paragliding tours to wine-tasting to concerts.

The startup generates revenues by taking a 20% commission on each transaction. Community managers in Africa screen and select the experiences that go up on the site.

Tastemakers will use the investment to grow the number of experiences offered from 200 to 10,000 and build out machine learning capabilities to better match suppliers, experiences, and clients—CEO and founder Cherae Robinson told TechCrunch.

She likened the site to an Airbnb for commoditizing and connecting people to Africa travel experiences at scale.

On the startup’s addressable market, Robinson references a segment of culture-curious travelers: people who are traveling to experience things such as foreign art, food, music, or dance workshops.

“We looked at who’s doing these kinds of tours and the number of people booking…and we found that globally, based on triangulating that, there are about 700 million people globally booking culture forward experiences,” said Robinson.

For different reasons—from negative stereotypes to the difficulty of identifying tourist options in Africa—most of these excursions occur in other parts of the world, according to Robinson.

She sees Tastemakers’ value proposition as the site that can bring a greater percentage of these culture travelers to Africa.

On revenue potential, Robinson is pretty up front on numbers and goals. “If we can capture 1% of that [700 million] market in the next five years that’s $2.2 billion generated on our platform,” she said, noting an average booking cost of $308. She believes Tastemakers could hit those figures by 2025—and by applying their 20 percent commission—reach income of $434 million.
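For what it’s worth, the math roughly checks out. Here is a quick back-of-the-envelope sketch using only the figures Robinson cites; the small gaps from the quoted $2.2 billion and $434 million come down to rounding.

```python
# Back-of-the-envelope check of the revenue figures quoted above. All inputs
# come from the article; nothing here is an official company projection.

market_bookings = 700_000_000  # culture-forward bookings per year, per Robinson
capture_rate = 0.01            # the 1% of that market she targets
avg_booking = 308.0            # average booking value in USD
commission = 0.20              # Tastemakers' cut per transaction

gross_bookings = market_bookings * capture_rate * avg_booking
revenue = gross_bookings * commission

print(f"Gross bookings: ${gross_bookings / 1e9:.2f}B")  # ~ $2.16B
print(f"Commission revenue: ${revenue / 1e6:.0f}M")      # ~ $431M
```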

Precursor Ventures Managing Partner Charles Hudson invested in Tastemakers for its potential as an early entrant in an off-the-grid travel market that is attracting more curiosity.

“I just had a sense that Africa was having a moment, and whether it’s Black Panther or more startups that have a foot in Africa, that there were more people interested in going to Africa,” he told TechCrunch.

“And it’s not like going to New York City…You have providers that are hard to find and hard to book…that are not super well marketed. If you can become an aggregator and curator of those, you could effectively become the largest source of lead generation,” Hudson said.

Tastemakers is looking at ancillary partnership and revenue-share opportunities. It uses Stripe and WorldRemit to process mobile payments for transactions on the site and has done promotional partnerships with Uber Africa. The startup also counts Kempinski Hotels as its biggest lodging partner.

Tastemakers also offers advisory services to sellers on the site, helping them determine price points and market their travel experiences more effectively online.

CEO Cherae Robinson is clear about the company’s for-profit status, but sees upside for Africa beyond generating business from tourism. “I strategically don’t brand Tastemakers as a social impact startup…but we’re driving benefits of the sharing economy to diverse populations both in Africa and in underrepresented communities in the technology and tourism sectors,” she said.

Artificial intelligence can contribute to a safer world

We all see the headlines nearly every day. A drone disrupting the airspace at one of the world’s busiest airports, putting aircraft at risk (and inconveniencing hundreds of thousands of passengers), or attacks on critical infrastructure. Or a shooting in a place of worship, a school, a courthouse. Whether primitive (gunpowder) or cutting-edge (unmanned aerial vehicles), in the wrong hands technology can empower bad actors and put our society at risk, creating a sense of helplessness and frustration.

Current approaches to protecting our public venues are not up to the task and, frankly, appear to meet Einstein’s definition of insanity: “doing the same thing over and over and expecting a different result.” It is time to look past traditional defense technologies and see if newer approaches can tilt the pendulum back in the defender’s favor. Artificial intelligence (AI) can play a critical role here, helping to identify, classify and promulgate counteractions on potential threats faster than any security personnel could.

Using technology to prevent violence, specifically by searching for concealed weapons, has a long history. Alexander Graham Bell invented the first metal detector in 1881 in an unsuccessful attempt to locate the fatal slug as President James Garfield lay dying of an assassin’s bullet. The first commercial metal detectors were developed in the 1960s. Most of us are familiar with their use in airports, courthouses and other public venues to screen for guns, knives and bombs.

However, metal detectors are slow and full of false positives – they cannot distinguish between a Smith & Wesson and an iPhone. It is not enough to simply identify a piece of metal; it is critical to determine whether it is a threat. Thus, the physical security industry has developed newer approaches, including full-body scanners – which are now deployed on a limited basis. While effective to a point, the systems in use today all have significant drawbacks. One is speed. Full body scanners, for example, can process only about 250 people per hour, not much faster than a metal detector. While that might be okay for low volume courthouses, it’s a significant problem for larger venues like a sporting arena.

Fortunately, new AI technologies are enabling major advances in physical security capabilities. These new systems not only deploy advanced sensors to screen for guns, knives and bombs, they get smarter with each screening, creating an increasingly large database of known and emerging threats while segmenting off alarms for common, non-threatening objects (keys, change, iPads, etc.).

As part of a new industrial revolution in physical security, engineers have developed a welcome approach to expediting security screenings for threats through machine learning algorithms, facial recognition, and advanced millimeter wave and other RF sensors that non-intrusively screen people as they walk through scanning devices. It’s like walking through the sensors at the door at Nordstrom, the opposite of the prison-like experience of metal detectors with which we are all too familiar. These systems produce an analysis of what someone may be carrying in about a hundredth of a second, far faster than full-body scanners. What’s more, people do not need to empty their pockets during the process, further adding to the speed. Even so, these solutions can screen for firearms, explosives, and suicide vests or belts at a rate of about 900 people per hour through one lane.
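To put the quoted throughput numbers in perspective, the toy comparison below uses only the per-lane rates from this article; the crowd size and lane count are hypothetical figures chosen purely for illustration.

```python
# Hypothetical illustration of screening throughput. Only the per-lane rates
# (250/hour for full-body scanners, 900/hour for the newer AI-driven lanes)
# come from the article; the crowd size and lane count are made up.

crowd_size = 15_000  # hypothetical arena crowd
lanes = 8            # hypothetical number of screening lanes

for name, per_lane_per_hour in [("full-body scanners", 250), ("AI screening lanes", 900)]:
    hours = crowd_size / (per_lane_per_hour * lanes)
    print(f"{name}: ~{hours:.1f} hours to screen {crowd_size:,} people with {lanes} lanes")
```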

Using AI, advanced screening systems enable people to walk through quickly and provide an automated decision without creating a bottleneck. This throughput greatly improves traffic flow while also improving the accuracy of detection, and it makes the technology suitable for larger facilities such as stadiums and other public venues, including Lincoln Center in New York City and the Oakland airport.

Apollo Shield’s anti-drone system.

So much for the land; what about the air? Increasingly, drones are being used as weapons. Famously, this was seen in a drone attack last year against Venezuelan president Nicolas Maduro. An airport drone incident drew widespread attention when a drone shut down Gatwick Airport in late 2018, inconveniencing and stranding tens of thousands of people.

People are rightly concerned about how easy it is to get a gun. Drones are also easy to acquire and operate, and quite difficult to monitor and to defend against. AI is now being deployed to prevent drone attacks, whether at airports, stadiums, or critical infrastructure. For example, new AI-powered radar technology is being used to detect, classify, monitor and safely capture drones identified as dangerous.

Additionally, these systems can rapidly develop a map of the airspace and effectively create a security “dome” around specific venues or areas. These systems have an integration component to coordinate with on-the-ground security teams and first responders. Some even have a capture drone to intercept a suspicious drone. When a threatening drone is detected and classified by the system as dangerous, the capture drone is dispatched and nets the invading drone. The hunter then tows the targeted drone to a safe zone for the threat to be evaluated and, if needed, destroyed.

While there is much dialogue about the potential risk of AI affecting our society, there is also a positive side to these technologies. Coupled with our best physical security approaches, AI can help prevent violent incidents.

AT&T rolls out (limited) 5G in (parts of) New York City

Both Verizon and Sprint have been promising 5G coverage in the nation’s largest city for some time now. AT&T this morning, however, said it’s starting to do just that. The U.S.’s largest carrier by subscribers announced limited availability of 5G coverage in New York City.

The typical not-so-fine print applies to the news this morning. The service will be limited to business users at launch — and only available in a select number of areas. In other words, don’t go running out and buying a 5G phone just yet, if you’re an AT&T customer in the five boroughs.

On the plus side, 5G+ is the real deal, unlike the deceptively named 5GE that came before it. And AT&T’s being reasonably transparent about the limited nature of the rollout.

“As a densely-populated, global business and entertainment hub, New York City stands to benefit greatly from having access to 5G, and we’ve been eager to introduce the service here,” AT&T’s New York President Amy Kramer said in a release. “While our initial availability in NYC is a limited introduction at launch, we’re committed to working closely with the City to extend coverage to more neighborhoods throughout the five boroughs.”

Per CNET, the rollout is limited to a small section of Manhattan for the time being, including, “near and around East Village, Greenwich Village and Gramercy Park.” Business users can access the service using Samsung’s Galaxy S10 5G on the carrier’s Business Unlimited Preferred plan.

New Google Area 120 project Shoelace aims to connect people around shared interests

A new project from Google’s in-house incubator, Area 120, aims to help people find things to do and connect with others who share their interests. Through a new app called Shoelace — a name designed to make you think of tying things together — users can browse through a set of hand-picked activities, or add their own to a map. For example, someone who wanted to connect with fellow dog owners could start an activity for a doggie playdate at the park, then start a group chat to coordinate the details and make new friends.

The end result feels a bit like a mashup of Facebook Events with a WhatsApp group chat, perhaps. But it’s wrapped in a clean, modern design that appeals more to the millennial or Gen Z user.

Like Meetup and others in the space, Shoelace’s focus is not on building yet another social networking app, but rather on leveraging a social app to inspire real-world connections.

This is not a novel idea. In fact, startups many times over have tried to create an alternative to Facebook by offering tools to connect users around locations or shared interests, instead of only re-creating users’ established friend networks online. And many cities today have their own social clubs designed to help people make new friends and participate in fun, local activities.

Shoelace is still in invite-only testing and only offered in New York City, for the time being.

However, its website says that the long-term goal is to bring the app to cities nationwide after the team learns what does and does not work. There’s also a form that will allow you to request Shoelace in your own community.

Google has had a rocky history when it comes to social networking products. Its largest effort to date, Google+, finally wound down its consumer business in April. That said, Shoelace is not really a “Google” product — it’s a project built by Googlers as a part of the Area 120 incubator, where employees can experiment with new ideas full-time without having to leave the company.

“One of the many projects that we’re working on within Area 120 is Shoelace, an app that helps people meet others with similar interests in person through curated activities,” a Google spokesperson confirmed to TechCrunch. “Like other projects within Area 120, it’s an early experiment so there aren’t many details to share right now,” they said.

The app is live on Google Play and iOS (TestFlight) for those who have received an invite.

Sidewalk Labs’ blueprint for a ‘mini’ smart city is a massive data mine

Sidewalk Labs, the smart city technology firm owned by Google’s parent company Alphabet, released a plan this week to redevelop a piece of Toronto’s eastern waterfront into its vision of an urban utopia — a ‘mini’ metropolis tucked inside a digital infrastructure burrito and bursting with gee-whiz tech-ery.

A place where high-tech jobs and affordable housing live in harmony, streets are built for people, not just cars, all the buildings are sustainable and efficient, public spaces are dotted with internet-connected sensors, and an outdoor comfort system with giant “raincoats” is designed to keep residents warm and dry even in winter. The innovation even extends underground, where a freight delivery system ferries packages without the need for street-clogging trucks. 

But this plan is more than a testbed for tech. It’s a living lab (or petri dish, depending on your view), where tolerance for data collection and expectations for privacy are being shaped, public due process and corporate reach is being tested, and what makes a city equitable and accessible for all is being defined.

It’s also more ambitious and wider in scope than its original proposal.

“In many ways, it was like a 50-sided Rubik’s cube when you’re looking at initiatives across mobility, sustainability, the public realm, buildings and housing and digital governance,” Sidewalk Labs CEO Dan Doctoroff said Monday describing the effort to put together the master plan called Toronto Tomorrow: A New Approach for Inclusive Growth.

Even the harshest critics of the Sidewalk Labs plan might agree with Doctoroff’s Rubik cube analogy. It’s a complex plan with big promises and high stakes. And despite the 1,500-plus page tome presenting the idea, it’s still opaque.

Airbus-owned Voom will compete with Uber Copter in the U.S. in 2019

The U.S. air taxi market is heating up: Aeronautics industry giant Airbus will be among the companies operating on-demand air travel service in 2019 in American skies, FastCompany reports. Airbus’ Voom on-demand helicopter shuttle operation will set up shop in the U.S. starting this fall, after previously providing service exclusively in Latin America.

Uber announced its own Uber Copter service earlier this month, which will provide service from Manhattan to JFK airport starting in July, and Blade also already offers similar service between New York City and its three area airports, as well as Bay Area air shuttle routes. Airbus’ Voom is also going to expand to Asia in 2019, the company confirmed to FastCompany, and intends to cover 25 cities globally by 2025 with an anticipated passenger volume of two million people per year.

All of these companies see their helicopter service as an entry point for planned shifts to electric vertical takeoff and landing (eVTOL) craft. Airport shuttles seem to be the perfect use case for these early instances of air taxi services, since they greatly reduce travel times at peak hours, and they cater to clientele who are likely frequent travelers and can either expense or afford the ~$200 trips.