Amazon launches EC2 Container Registry for storing container images

Amazon chief technology officer Werner Vogels announces the EC2 Container Registry at the AWS re:Invent conference in Las Vegas on Oct. 8.

Amazon Web Services (AWS) today announced the EC2 Container Registry, a new tool for storing container images for parts of applications. The launch follows Google’s introduction of its Container Registry at the beginning of this year.

The tool is “a fully managed service … where you can launch your container applications from,” Amazon chief technology officer Werner Vogels said today at the AWS re:Invent conference.

AWS is also launching a new Service Scheduler to run containers across data centers in AWS’ multiple availability zones, as well as an integration of Compose, open-source software from hot container startup Docker, and a new ECS command-line interface (CLI).

This builds on last year’s announcement of the EC2 Container Service for deploying applications onto Amazon cloud infrastructure. Microsoft, Google, and IBM all have their own comparable services in this area.

Details on the new service are in a new blog post from AWS chief evangelist Jeff Barr.


SwiftKey taps neural networks for a new keyboard app that could improve predictive typing

SwiftKey Neural

SwiftKey has launched a new experimental version of its popular mobile keyboard app, one that could significantly improve the accuracy of predictive typing.

The SwiftKey Neural Alpha app is an Android-only affair for now and, as its name suggests, it’s still an early-stage product, so it may be prone to bugs. But it’s for this reason that the London-based company elected to launch a separate, standalone app rather than integrate the features into its existing flagship app.

If you’re new to SwiftKey: the app has built a solid reputation on Android over a number of years, replacing the default keyboard app on phones and tablets. It learns your writing style over time to speed up typing, and can even predict the next word before you’ve started typing it — a prediction based partly on your historical patterns.


The new SwiftKey Neural app uses artificial neural networks (ANNs) to predict and correct language. ANNs are part of the broader field of machine learning and artificial intelligence, and are more directly inspired by the structure and workings of the human brain. This is in contrast to SwiftKey’s existing n-gram model, which is rooted in probability and computational linguistics.
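To see what the older approach looks like, here is a toy bigram (2-gram) predictor in Python. It is a minimal sketch for illustration only — SwiftKey’s production model is far more sophisticated, and the corpus and function names here are made up:

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus):
    """Count, for each word, which words follow it in the training text."""
    successors = defaultdict(Counter)
    words = corpus.lower().split()
    for current, following in zip(words, words[1:]):
        successors[current][following] += 1
    return successors

def predict_next(successors, word, k=3):
    """Return the k most frequent successors of `word` in the training data."""
    return [w for w, _ in successors[word.lower()].most_common(k)]

# Tiny made-up corpus; a real keyboard model trains on vastly more text.
corpus = (
    "i am going to the shop . i am going home . "
    "i am happy . we are going to the park ."
)
model = train_bigram_model(corpus)
print(predict_next(model, "going"))   # → ['to', 'home']
```

The limitation is visible even here: the model only knows raw co-occurrence counts, so it cannot generalize to phrasings it has never seen — which is exactly the gap the neural approach aims to close.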

“It gives the ability for SwiftKey to predict and suggest words in a way that’s more meaningful and more like how language is actually used by people,” said SwiftKey chief marketing officer (CMO) Joe Braidwood in an interview with VentureBeat.

Though it’s early days, this signals a notable step forward for mobile typing, and could lead to more meaningful, context-specific suggestions. Here’s a look at some examples of how the neural incarnation of SwiftKey can improve the app’s ability to second-guess what you want to type.

1. N-Gram vs. Neural

Above: 1. N-Gram vs. Neural

2. N-Gram vs. Neural

Above: 2. N-Gram vs. Neural

3. N-Gram vs. Neural

Above: 3. N-Gram vs. Neural

4. N-Gram vs. Neural

Above: 4. N-Gram vs. Neural

Though this represents the first time such technology has been implemented in a keyboard app, Google recently dabbled with neural networks in an update to the Google Translate app.



Munchery Hires La Boulange Founder Pascal Rigo To Improve The Customer Experience

On-demand food delivery startup Munchery just hired Pascal Rigo, the founder of upscale bakery La Boulange, which Starbucks bought for $100 million in 2012. As part of the deal, Rigo spent three years at Starbucks as its SVP of food, helping the coffee giant roll out all of La Boulange’s products in the U.S. and Canada. They had six years to do all of that, but they finished it in three,… Read More

Why the beautiful, time-tested science of data visualization is so powerful

data visualization

This sponsored post is produced by Tableau.

For almost as long as we have been writing, we’ve been putting meaning into maps, charts, and graphs. Some 1,300 years ago, Chinese astronomers recorded the position of the stars and the shapes of the constellations. The Dunhuang star maps are the oldest preserved atlas of the sky.

Tableau image1


More than 500 years ago, the residents of the Marshall Islands learned to navigate the surrounding waters by canoe in the daytime — without the aid of stars. These master rowers learned to recognize the feel of the currents reflecting off the nearby islands. They visualized their insights on maps made of sticks, rocks, and shells.

Tableau image2

In the 1800s, Florence Nightingale used charts to explain to government officials how treatable diseases were killing more soldiers in the Crimean War than battle wounds. She knew that pictures would tell a more powerful story than numbers alone.

Tableau image3


Why does visualizing work so well, anyway?

Since long before spreadsheets and graphing software, we have communicated data through pictures. But we’ve only begun, in the last half-century, to understand why visualizations are such effective tools for seeing and understanding data.

It starts with the part of your brain called the visual cortex. Located near the bony lump at the back of your skull, it processes input from your eyes. Thanks to the visual cortex, our sense of sight provides information much faster than the other senses. We actually begin to process what we see before we think about it.

This is sound from an evolutionary perspective. The early human who had to stop and think, “Hmm, is that a jaguar sprinting toward me?” probably didn’t survive to pass on their genes. There is a biological imperative for our sense of sight to override cognition — in this case, for us to pay sharp attention to movement in our peripheral vision.

Today, our sight is more likely to save us on a busy street than on the savannah. Moving cars and blinking lights activate the same peripheral attention, helping us navigate a complicated visual environment. We see other cues on the street, too. Bright orange traffic cones mark hazards. Signs call out places, directions, and warnings. Vertical stripes on the street indicate lanes while horizontal lines indicate stop lines.

We have designed a rich visual system that drivers can comprehend quickly, thanks to perceptual psychology. Our visual cortex is attuned to color hues (like safety orange), position (signs placed above the road), and line orientation (lanes versus stop lines). Research has identified other visual features: size, clustering, and shape also help us perceive our environment almost immediately.

What this means for you and me

Fortunately, our offices and homes tend to be safer than the savannah or the highway. Regardless, our lightning-quick sense of vision jumps into action even when we read email, tweets, or websites. And that right there is why data visualization communicates so powerfully and immediately: it takes advantage of these visual features, too.

A line graph immediately reveals upward or downward changes, thanks to the orientation of each segment. The axes of the graph use position to communicate values in relation to each other. If there are multiple colored lines, hue lets us rapidly tell the lines apart, no matter how many times they cross. Bar charts, maps with symbols, area graphs — these all use the visual superhighway in our brains to communicate meaning.

The early pioneers of data visualization were led by their intuition to use visual features like position, clustering, and hue. The longevity of those works is a testament to their power.

We now have software to help us visualize data and to turn tables of facts and figures into meaningful insights. That means anyone, even non-experts, can explore data in a way that wasn’t possible even 20 years ago. We can, all of us, analyze the world’s growing volume of data, spot trends and outliers, and make data-driven decisions.

Today, we don’t just have charts and graphs; we have the science behind them. We have started to unlock the principles of perception and cognition so we can apply them in new ways and in various combinations. A scatter plot can leverage position, hue, and size to visualize data. Its data points can interactively filter related charts, allowing the user to shift perspectives in their analysis by simply clicking on a point. Animating transitions as users pivot from one idea to the next brings previously hidden differences to the foreground. We’re building on the intuition of the pioneers and the conclusions of science to make analysis faster, easier, and more intuitive.
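Those three channels — position, hue, and size — can be made concrete with a small sketch. The Python below maps a made-up dataset onto a scatter-plot-style encoding; the data, the color palette, and the sizing rule are all illustrative assumptions, not any particular tool’s implementation:

```python
def encode_point(row, max_population):
    """Map one data row onto the position, hue, and size channels."""
    palette = {"Asia": "red", "Europe": "blue", "Africa": "green"}
    return {
        "x": row["gdp_per_capita"],      # position: horizontal axis
        "y": row["life_expectancy"],     # position: vertical axis
        "hue": palette.get(row["continent"], "gray"),  # hue: category
        # size: marker area scales with magnitude (population, in millions)
        "size": 10 + 40 * row["population"] / max_population,
    }

# Made-up example rows, for illustration only.
countries = [
    {"continent": "Asia", "gdp_per_capita": 8000,
     "life_expectancy": 74, "population": 1_400},
    {"continent": "Europe", "gdp_per_capita": 40000,
     "life_expectancy": 81, "population": 83},
]
largest = max(c["population"] for c in countries)
marks = [encode_point(c, largest) for c in countries]
print(marks[0]["hue"], round(marks[0]["size"]))   # → red 50
```

Each channel carries an independent dimension of the data, which is why a single scatter plot can let a reader compare three or four variables at a glance.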

When humanity unlocked the science behind fire and magnets, we learned to harness chemistry and physics in new ways. And we revolutionized the world with steam engines and electrical generators.

Humanity is now at the dawn of a new revolution, and intuitive tools are putting the beautiful science of data visualization into the hands of millions of users.

I’m excited to see where you take all of us next.
Jeff Pettiross is Head of User Experience at Tableau.
Learn more about the power of visualizing data at the Tableau Conference in Las Vegas later this month. 

Sponsored posts are content that has been produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. The content of news stories produced by our editorial team is never influenced by advertisers or sponsors in any way. For more information, contact [email protected].

Panel to explore the super-geek gaming future

Sinjin Bain, CEO of Maxplay

Where is gaming technology going?

Seamus Blackley of Innovative Leisure

Above: Seamus Blackley of Innovative Leisure

We’ve had tantalizing glimpses of the future with demos of augmented reality, virtual reality, user-generated content, esports, and the coolest new mobile devices. At the high end, gaming PCs using Intel’s Skylake processors and dedicated graphics chips can power multiple screens and even motion simulators. For gamers, we only know that it’s going to get better.

It isn’t easy prognosticating about the future. Thomas J. Watson, IBM’s longtime chairman, is famously said to have predicted in 1943 that there would be a world market for perhaps five computers. When we’re faced with uncertainty, we fill it with things that we know. But we’ve gathered some interesting seers to make their own predictions in a panel entitled “The super-geek gaming future” at our GamesBeat 2015 event, located at the Grand Hyatt Union Square on October 12-13 in San Francisco. You can sign up for it now.

Chris Kohler, writer on gaming at Wired, will moderate the session. Our panelists include Sinjin Bain, CEO of Maxplay; Kristan “Krispy” Uccello, developer advocate for games at Google; Seamus Blackley, cofounder of the Xbox and Innovative Leisure; and William “Rhys” Dekle, director of worldwide business development at Microsoft Game Studios.

Chris Kohler of Wired

Above: Chris Kohler of Wired

Image Credit: Wired
Kristan "Krispy" Uccello, developer advocate for games at Google.

Above: Kristan “Krispy” Uccello, developer advocate for games at Google.

Image Credit: Google

Gaming has many kingdoms: mobile, console, PC, online, geographic, and more. In each of these powerful realms, companies are fighting to grow fast, to come out on top, and to cross boundaries to rule more than one empire. Playing the competitive game, making alliances, prepping for new platforms like augmented and virtual reality, and surviving the incredibly fast change in gaming right now is more difficult than ever. It’s more complex than the fantasy world of Westeros in HBO’s Game of Thrones … and it’s happening right in front of us all.

At GamesBeat 2015, we’ll dissect how these kings and queens are battling for gaming supremacy and growth. We’ll find out who’s leading and how they are winning.

We’re screening our speakers for bold ideas, transparency, global strategies, creativity, and diversity. Our speakers show that gaming has become a global and diverse business with many intricacies and strategies. This year, we’ll have as many as 105 speakers and many well-known moderators over the course of two days.

William "Rhys" Dekle of Microsoft

Above: William “Rhys” Dekle of Microsoft

Image Credit: Microsoft

Our previously announced speakers include:

Tim Sweeney, CEO of Epic Games.


Anatoly Ropotov, CEO of mobile-game company Game Insight.

Graeme Devine, chief creative officer and vice president of games at Magic Leap.

Owen Mahoney, CEO of Nexon.

Michael Pachter, research analyst at Wedbush Securities.

Phil Sanderson, managing director at IDG Ventures.

Sunny Dhillon, partner at Signia Venture Partners.

Jason Rubin, head of Worldwide Studios at Oculus VR, a division of Facebook.

Shintaro Asako, the CEO of DeNA West.

Matt Wolf, global head of gaming at Coca-Cola.

Chris Fralic, partner at First Round Capital.

Niccolo De Masi, CEO of Glu Mobile.

Brianna Wu, head of development at Giant Spacekat.

Emily Greer, head of Kongregate.

Rajesh Rao, CEO of GameTantra and Dhruva Interactive.

Jessica Rovello, CEO of Arkadium.

Kate Edwards, executive director at the International Game Developers Association.

Thanks to the following industry leaders for supporting GamesBeat 2015: Game Insight as Featured Partner; Microsoft as Platinum Partner; RockYou, AppLovin, and Samsung as Gold Partners; TrialPay and Authy (a Twilio company) as Silver Partners; and PlayPhone, Fuel Powered, Utomik, Glispa, and Bluestacks as Event Partners.

Our GamesBeat 2015 advisory board:

  • Ophir Lupu, head of video games at United Talent Agency
  • Jay Eum, managing director at TransLink Capital
  • Phil Sanderson, managing director at IDG Ventures
  • Sunny Dhillon, partner at Signia Venture Partners
  • Reinout te Brake, CEO of GetSocial
  • Mike Vorhaus of Magid Advisors

Employees Deplane From Flightcar As It Undergoes Major “Restructuring”

Imagine a young startup where two of three founders are pushed out the door. Imagine that this same startup parts ways with its COO, its SVP of Finance, its VP of Guest Experience, its VP of Engineering, its VP of Marketing, and roughly half its other full-time employees, all within a period of months. Finally, imagine that the remaining cofounder, who is 20, has never before held a… Read More

Facebook finally reveals what its ‘dislike button’ will really look like

Facebook's 'Dislike Button'

It’s probably one of Facebook’s most requested features — a “dislike” button that lets users express an emotion other than “like.” News emerged last month that the social network was finally working on something along those lines, though CEO Mark Zuckerberg declined to share exactly what it would look like — but today things are becoming a whole lot clearer.

From tomorrow (October 9, 2015), some Facebook users in Ireland and Spain will start seeing “Reactions,” Facebook’s new emoji-based buttons that let users express additional emotions in response to posts.

Facebook Reactions

Above: Facebook Reactions

To add a reaction, you simply press the “Like” button on the mobile app, or hover over the “Like” button on desktop to open the extra reaction buttons.

Facebook Reactions: Sad

Above: Facebook Reactions: Sad

This is in line with what Zuckerberg has said all along — a simple “Dislike” button would be open to abuse and could create a negative atmosphere on the social network. This way, users can not only “Like” something but also love it, or show that they found it amusing, sad, or amazing, or that it made them angry.

Facebook Reactions: Love

Above: Facebook Reactions: Love


“People come to Facebook to share all kinds of things —  whether that’s updates that are happy, sad, funny or thought-provoking,” a Facebook spokesperson tells VentureBeat. “And we’ve heard you’d like more ways to celebrate, commiserate or laugh together. That’s why we are testing Reactions, an extension of the Like button, to give you more ways to share your reaction to a Facebook post in a quick and easy way.”

Facebook has needed more sentiment options for a while, and tomorrow it moves one step closer to offering them. There is no official word yet on when Reactions will launch more widely, but it’s typical of Facebook to slowly introduce new features such as this and tweak them based on feedback from a smaller, more localized group.