The Pentagon is one of the largest technology customers in the world, purchasing everything from F-35 fighter jets (roughly $90 million each) to cloud services (the JEDI contract was worth $10 billion). Yet despite spending hundreds of billions of dollars on acquisitions, the Defense Department has struggled to push nascent technologies from startups through its punishing procurement process.
The department launched the Defense Innovation Unit a few years back to connect startups with the defense world. Now, the military has decided to engage even earlier to ensure that the next generation of startups can equip it with the latest technology.
Cambridge, MA-based MIT and the U.S. Air Force announced today that they are teaming up to launch a new accelerator focused on artificial intelligence applications, with the Air Force committed to investing $15 million into roughly ten MIT research projects per year. The accelerator will be called the MIT-Air Force AI Accelerator (clearly, the Pentagon hasn’t gotten better at naming things).
This will not be the Air Force’s first foray into accelerators. The service also built out an accelerator with TechStars that is directly targeted at solving the Air Force’s problems. It’s not yet clear whether the TechStars accelerator, which is also based in Boston, is being merged into the MIT accelerator or will remain a separate entity.
While MIT has had close relationships with the military going back decades, concerns have grown among some technologists about working on frontier tech like artificial intelligence and drones in a military context, especially an offensive one. Last year, employee protests at Google pushed the tech giant to step away from Project Maven, a Pentagon program that applied AI to analyzing battlefield drone footage.
In the announcement for this accelerator, MIT said that, “In addition to disaster relief and medical readiness, other possible research areas may include data management, maintenance and logistics, vehicle safety, and cyber resiliency.” It also highlighted that it hoped the projects entering the accelerator would be “addressing challenges that are important to both the Air Force and society more broadly.” Whether there are any limits on the types of projects that will be allowed on campus is not clear.
Microsoft is investing in certification and training for a range of A.I.-related skills in partnership with education provider General Assembly, the companies announced this morning. The goal is to train some 15,000 people by 2022 in order to increase the pool of A.I. talent around the world. The training will focus on A.I., machine learning, data science, cloud and data engineering, and more.
In the new program’s first year, Microsoft will focus on training 2,000 workers to transition to an A.I. or machine learning role. Over the full three years, it will train an additional 13,000 workers in A.I.-related skills.
As part of this effort, Microsoft is joining General Assembly’s new A.I. Standards Board along with other companies. Over the next six months, the Board will help to define A.I. skills standards, develop assessments, design a career framework, and create credentials for A.I. skills.
The training developed will also focus on filling the A.I. jobs currently available where Microsoft technologies are involved. As Microsoft notes, many workers today lack the skills for roles involving the use of Azure in aerospace, manufacturing, and elsewhere. The training, it says, will focus on serving the needs of its customers who are looking to hire A.I. talent.
This will also include the creation of an A.I. Talent Network that will source candidates for long-term employment as well as contract work. General Assembly will assist with this effort by connecting its 22 campuses and the broader Adecco ecosystem to this jobs pipeline. (GA sold to staffing firm Adecco last year for $413 million.)
Microsoft cited the potential for A.I.’s impact on job creation as a reason behind the program, noting that up to 133 million new roles may be created by 2022 as a result of the new technologies. Of course, it’s also very much about making sure its own software and cloud customers can find people who are capable of working with its products, like Azure.
“As a technology company committed to driving innovation, we have a responsibility to help workers access the AI training they need to ensure they thrive in the workplace of today and tomorrow,” said Jean-Philippe Courtois, executive vice president and president of Global Sales, Marketing and Operations at Microsoft, in a statement. “We are thrilled to combine our industry and technical expertise with General Assembly to help close the skills gap and ensure businesses can maximize their potential in our AI-driven economy.”
Children with vision impairments struggle to get a solid K-12 education for a lot of reasons — so the more tools their teachers have to impart basic skills and concepts, the better. ObjectiveEd is a startup that aims to empower teachers and kids with a suite of learning games accessible to all vision levels, along with tools to track and promote progress.
Some of the reasons why vision-impaired kids don’t get the education they deserve are obvious, for example that reading and writing are slower and more difficult for them than for sighted kids. But other reasons are less obvious, for example that teachers have limited time and resources to dedicate to these special needs students when their overcrowded classrooms are already demanding more than they can provide.
Technology isn’t the solution, but it has to be part of the solution, because technology is so empowering and kids take to it naturally. There’s no reason a blind 8-year-old can’t also be a digital native like her peers, and that presents an opportunity for teachers and parents both.
This opportunity is being pursued by Marty Schultz, who has spent the last few years as head of a company that makes games targeted at the visually-impaired audience, and in the process saw the potential for adapting that work for more directly educational purposes.
“Children don’t like studying and don’t like doing their homework,” he told me. “They just want to play video games.”
It’s hard to argue with that. True of many adults too for that matter. But as Schultz points out, this is something educators have realized in recent years and turned to everyone’s benefit.
“Almost all regular education teachers use educational digital games in their classrooms and about 20 percent use it every day,” he explained. “Most teachers report an increase in student engagement when using educational video games. Gamification works because students own their learning. They have the freedom to fail, and try again, until they succeed. By doing this, students discover intrinsic motivation and learn without realizing it.”
Having learned to type, point and click, do geometry and identify countries via games, I’m a product of this same process and many of you likely are as well. It’s a great way for kids to teach themselves. But how many of those games would be playable by a kid with vision impairment or blindness? Practically none.
It turns out that these kids, like others with disabilities, are frequently left behind as the rising technology tide lifts everyone else’s boats. The fact is, it’s difficult and time-consuming to create accessible games that target skills like Braille literacy and blind navigation of rooms and streets. Developers haven’t been able to do so profitably, so teachers are left to jury-rig existing resources on their own or, more likely, fall back on tried-and-true methods like printed worksheets, in-person instruction, and spoken testing.
And since teacher time is limited and instructors trained in vision impaired learning are thin on the ground, these outdated methods are also difficult to cater to an individual student’s needs. For example a kid may be great at math but lack directionality skills. You need to draw up an “individual education plan” (IEP) explaining (among other things) this and what steps need to be taken to improve, then track those improvements. It’s time-consuming and hard! The idea behind ObjectiveEd is to create both games that teach these basic skills and a platform to track and document progress as well as adjust the lessons to the individual.
How this might work can be seen in a game like Barnyard, which like all of ObjectiveEd’s games has been designed to be playable by blind, low vision, or fully sighted kids. The game has the student finding an animal in a big pen, then dragging it in a specified direction. The easiest levels might be left and right, then move on to cardinal directions, then up to clock directions or even degrees.
“If the IEP objective is ‘Child will understand left versus right and succeed at performing this task 90 percent of the time,’ the teacher will first introduce these concepts and work with the child during their weekly session,” Schultz said. That’s the kind of hands-on instruction they already get. “The child plays Barnyard in school and at home, swiping left and right, winning points and getting encouragement, all week long. The dashboard shows how much time each child is playing, how often, and their level of success.”
That’s great for documentation for the mandated IEP paperwork, and difficulty can be changed on the fly as well:
“The teacher can set the game to get harder or faster automatically, or move onto the next level of complexity automatically (such as never repeating the prompt when the child hesitates). Or the teacher can maintain the child at the current level and advance the child when she thinks it’s appropriate.”
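The adjustable-difficulty loop Schultz describes can be pictured with a small sketch. This is a hypothetical illustration, not ObjectiveEd’s actual code; the level names and thresholds below are invented:

```python
# Hypothetical sketch of an adaptive-difficulty policy like the one
# described above: advance when the student is consistently succeeding,
# hold or step back otherwise. Level names and thresholds are invented.

LEVELS = ["left/right", "cardinal", "clock", "degrees"]

def next_level(current: int, recent_results: list,
               auto_advance: bool = True,
               promote_at: float = 0.9, demote_at: float = 0.5) -> int:
    """Return the level index for the next session.

    recent_results: success/failure (True/False) of recent attempts.
    auto_advance: if False, the teacher controls advancement manually.
    """
    if not auto_advance or not recent_results:
        return current
    success_rate = sum(recent_results) / len(recent_results)
    if success_rate >= promote_at and current < len(LEVELS) - 1:
        return current + 1          # mastered: move up
    if success_rate < demote_at and current > 0:
        return current - 1          # struggling: ease off
    return current                  # keep practicing at this level

# A student succeeding 9 times out of 10 at "left/right" moves up.
results = [True] * 9 + [False]
print(LEVELS[next_level(0, results)])  # cardinal
```

The same loop supports the manual mode described in the quote: with `auto_advance` off, the student simply stays at the teacher-chosen level.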
This isn’t meant to be a full-on K-12 education in a tablet app. But it helps close the gap between kids who can play Mavis Beacon or whatever on school computers and vision-impaired kids who can’t.
Importantly, the platform is not being developed without expert help — or, as is actually very important, without a business plan.
“We’ve developed relationships with several schools for the blind as well as leaders in the community to build educational games that tackle important skills,” Schultz said. “We work with both university researchers and experienced Teachers of Visually Impaired students, and Certified Orientation and Mobility specialists. We were surprised at how many different skills and curriculum subjects that teachers really need.”
Based on their suggestions, for instance, the company has built two games to teach iPhone gestures and the accessibility VoiceOver rotor. This may be a proprietary technology from Apple but it’s something these kids need to know how to use, just like they need to know how to run a Google search, use a mouse without being able to see the screen, and other common computing tasks. Why not learn it in a game like the other stuff?
Making technological advances is all well and good, but doing so while building a sustainable business is another thing many education startups have failed to address. Fortunately, public school systems actually have significant money set aside specifically for students with special needs, and products that improve education outcomes are actively sought and paid for. These state and federal funds can’t be siphoned off to use on the rest of the class so if there’s nothing to spend them on, they go unused.
ObjectiveEd has the benefit of being easily deployed without much specialty hardware or software. It runs on iPads, which are fairly common in schools and homes, and the dashboard is a simple web app. Although the platform may eventually interface with specialty hardware like Braille readers, that isn’t necessary for many of the games and lessons, which lowers the deployment bar as well.
The plan for now is to finalize and test the interface and build out the games library — ObjectiveEd isn’t quite ready to launch, but it’s important to build it with constant feedback from students, teachers, and experts. With luck in a year or two the visually-impaired youngsters at a school near you might have a fun new platform to learn and play with.
“ObjectiveEd exists to help teachers, parents and schools adapt to this new era of gamified learning for students with disabilities, starting with blind and visually impaired students,” Schultz said. “We firmly believe that well-designed software combined with ‘off-the-shelf’ technology makes all this possible. The low cost of technology has truly revolutionized the possibilities for improving education.”
Kids need a good education to have the best chance of succeeding in the world, but in remote parts of developing countries there may be neither schools nor teachers. The Global Learning Xprize aimed to spur the creation of app-based teaching tools those kids can use on their own — and a tie means the $10 million grand prize gets split in two.
The winners, Onebillion and Kitkit School, both created tablet apps that produced serious gains in literacy in the areas where they were deployed. Each receives $5 million, on top of the $1 million they received for being finalists. Elon Musk and Xprize co-founder Anousheh Ansari were on hand to congratulate the winners.
Funded by a number of sponsors including Elon Musk, the prize started way back in 2014. Overseen at first by Matt Keller (previously at the famous but sadly unsuccessful One Laptop Per Child program), and later by Emily Musil Church, the prize asked entrants to create free, open-source software that kids could use to teach themselves basic reading, writing, and arithmetic.
After soliciting teams and winnowing the herd internally, five finalists emerged: CCI, Chimple, Kitkit School, Onebillion, and Robotutors. They came from a variety of locations and backgrounds, and as mentioned, all received a $1 million prize for reaching this stage.
These finalists were then subjected to field testing in Tanzania, where 8,000 Pixel C tablets generously donated by Google for the purpose were distributed to communities where teaching was hardest to come by and literacy rates lowest.
Among the participating kids, only about a quarter attended school, and only one in ten could read a single word in Swahili. By the end of the 15-month field test, 30 percent of the kids could read a complete sentence — results were even better among girls.
I asked about the field test process itself. Church, who led the prize project, gave a detailed answer that shows how closely the organization worked with local communities:
The field test was a very unique and complex operation – the field test included nearly 2,700 children and 170 villages in some of the most remote parts of Tanzania over the course of 15 months. XPRIZE worked closely with its partners on the ground to implement this unique 15-month field test – UNESCO, World Food Programme, and the Government of Tanzania. In total that required over 300 staff members in Tanzania from all levels – from the regional educational officials to village mamas — women from each village who have been empowered to ensure the smooth functioning of the test. This was truly a ground-up, community-driven operation. Logistically, this required identifying and sensitizing communities, conducting baseline and endline assessment of all the children prior to tablet distribution, installing solar charging stations in all of these villages for the tablets, and physical data collection and tablet distribution by our heroic Field Assistants on motorbikes (just to name a few of the critical activities).
Once the tablets were in the hands of the children – the general approach was to be very “hands-off” as we wanted to see whether or not the software itself was leading to learning gains. We instead relied on village mamas to create a safe environment in which a child can use the tablet when they chose to. In short – we realize that in order for this work to scale globally – hands-on instruction is hard to do.
The winning teams had similar approaches: gamify the content and make it approachable for any age or ability level. Rural Tanzania isn’t hurting literacy-wise because of a lack of worksheets. If these kids are going to learn, it needs to be engaging — like anywhere else, they learn best when they don’t realize they’re being taught.
Onebillion’s approach was to create a single but flexible long course that takes kids from absolutely zero reading knowledge to basic competency. “Onecourse is made of thousands of learning units, some could be on reading activities, some could be on numeracy activities — it’s a modular course, it’s built around the child’s day and adapts to their needs,” explained the company’s CTO, Jamie Stuart, in a video about the team.
“When the child is not yet at a stage when they can read, the story can be played back to the child a bit like an audio book. When the child starts to be able to decode words we can offer them assistance, and then later on they can attempt to read the story by themselves.”
Kitkit School came from Sooinn Lee and her husband, both game developers (and plenty of others, of course). She points out that games are fundamentally built around the idea of keeping the player engaged. “Sometimes in education software, I see there is software too much focused on what to deliver and what is the curriculum, rather than how a child will feel during this learning experience,” she said in her team video.
“We create gamified learning with a mixture of high quality graphics, sound, interactions, so a child will feel they’re doing a really fun activity, and they don’t care if they’re learning or not, because it feels so good.”
All the finalists were on the ground in these communities working with the kids, so this wasn’t just a fire-and-forget situation. And if we’re honest, that may partially account for the gains these kids showed.
After all, the main issue is a lack of resources, and while the tablets and curricula are a good way to bring learning to the kids, what matters most is that someone is bringing it at all. That said, pre-built fun learning experiences like this that can run on rugged, easily distributed hardware are definitely powerful tools to start with.
As for the communities involved — they won’t be left high and dry now that the testing is over. Church told me that there are plans to make the apps part of Tanzania’s education system:
Our UN partners on the ground (UNESCO and WFP) have worked hand-in-hand with the Government of Tanzania to develop a plan regarding how to continue to use the software (deployed in Tanzania as part of this project), the tablets in the project, and the solar stations installed. This plan will be implemented by the Government of Tanzania in late June in conjunction with UNESCO and WFP. Part of this plan is to get the content in all five of the applications approved to be part of the formal education system in Tanzania, so it can be integrated. We laud the foresight of Tanzania to see the value in tablet-driven learning as a way to reach all children.
And the devices themselves will stay put, or even be replaced. “The staff on the ground will work with the communities to ensure each child as part of this project receives up-to-date software and a new tablet,” Church wrote. “In addition our partners are actively working with communities to teach them how to maintain and continue to use the solar stations in their villages beyond this project.”
Not every needy kid has a rich western organization to drop a state-of-the-art tablet in their hands. But this is just the start of something larger — here’s hoping programs like this one will grow to encompass not just Africa but anywhere, including the U.S., where disadvantaged kids need a hand with the basics.
Rivet, a new app from Google’s in-house incubator, wants to help children struggling to read. The app hails from Area 120 — Google’s workshop for experimental projects — and includes over 2,000 free books for kids as well as an in-app assistant that can help kids when they get stuck on a word by way of advanced speech technology.
For example, if the child is having difficulties with a word they can tap it to hear it pronounced or they can say it themselves out loud to be shown in the app which parts were said correctly and which need work.
There are also definitions and translations for over 25 languages included in the app, to help kids — and especially non-native speakers — learn to read.
For younger readers, there’s a follow-along mode where the app will read the stories aloud with the words highlighted so the child can match up the words and sounds. When kids grow beyond needing this feature, parents can opt to disable follow-along mode so the kids have to read for themselves.
While there are a number of e-book reading apps aimed at kids on the market today, Rivet is interesting for its ability to leverage advances in voice technology and speech processing.
Starting today on Android and (soon) iOS, Rivet will be able to offer real-time help to kids when they tap the microphone button and read the page aloud. If the child hits a word and starts to struggle, the assistant will proactively jump in and offer support. This is similar to how parents help children to read — as the child reaches a word they don’t know or can’t say, the parent typically corrects them.
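Rivet hasn’t published how its assistant decides when to jump in, but the general idea — compare the words the speech recognizer heard against the page text and flag the first mismatch or stall — can be sketched in a few lines. Everything here is an invented illustration, not Rivet’s actual algorithm:

```python
# Toy illustration of the general idea behind proactive reading help:
# compare the words the child has read aloud (as returned by a speech
# recognizer) against the page text, and flag where support is needed.
# This is a guess at the approach, not Rivet's actual implementation.

def find_trouble_spot(page_words, spoken_words):
    """Return the index of the first page word the reader missed,
    or None if everything read so far matches."""
    for i, spoken in enumerate(spoken_words):
        if i >= len(page_words):
            break
        if spoken.lower() != page_words[i].lower():
            return i               # misread word: offer help here
    if len(spoken_words) < len(page_words):
        return len(spoken_words)   # reader stalled: next unread word
    return None

page = "The quick brown fox jumps".split()
print(find_trouble_spot(page, ["the", "quick", "brown"]))  # 3 (stuck on "fox")
print(find_trouble_spot(page, ["the", "quack"]))           # 1 (misread "quick")
```

In a real system the stall case would be triggered by a hesitation timeout rather than checked once per utterance, but the matching logic is the same.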
Rivet says all the speech processing takes place on the device to protect children’s privacy and its app is COPPA-compliant.
When the child completes a page, they can see which words they read correctly, and which they still need to work on. The app also doles out awards by way of points and badges, and personalizes the experience using avatars, themes and books customized to the child’s interests and reading level.
Other surprises and games keep kids engaged with the app and continuing to read.
According to Rivet’s Head of Tech and Product Ben Turtel, the team wanted to work on reading because it’s a fundamental skill — and one that needs to be mastered to learn just about everything else.
“Struggling readers,” Turtel says, “are unlikely to catch up and are four times less likely to graduate from high school. Unfortunately, 64 percent of fourth-grade students in the United States perform below the proficient level in reading.”
Rivet is not the first app from Google aimed at tackling reading. An app called Bolo offers a similar feature set, but is aimed at kids in India, instead.
Technology is very much in the business of, quite literally, changing the world. When I was deciding whether to write for TechCrunch, I tried to imagine a human life on this planet, in 20 or 30 years, that would not have been dramatically impacted in one way or another by the new technologies we’re creating today.
I couldn’t picture such a person, so I decided this ongoing series on tech ethics was the right thing to do with my time.
Prescod-Weinstein’s work critically analyzing the politics of the science world seems particularly salient to the tech world generally and to the world of tech ethics in particular: for example, Stanford recently launched an enormous new initiative in ethical technology, the “Institute for Human-Centered Artificial Intelligence (HAI),” which boasts plans for a billion-dollar investment in making AI “representative of humanity.”
Yet among the 121 HAI faculty members initially announced this March, it has been much discussed that the overwhelming majority were white, most were male, and not a single one was Black. In part one of my dialogue with Prescod-Weinstein, we discussed decolonization and intersectionality; here I’ll begin by asking her about inclusion.
Greg E.: How is the science world doing, in terms of creating an inclusive culture?
Chanda P.W.: I’ve got mixed feelings about the question of inclusion. We need to ask ourselves what our aims are and where we’re going with this. I don’t necessarily think that tokens solve the problem.
Microsoft’s yearly Imagine Cup student startup competition crowned its latest winner today: EasyGlucose, a non-invasive, smartphone-based method for diabetics to test their blood glucose. It and the two other similarly beneficial finalists presented today at Microsoft’s Build developers conference.
The Imagine Cup brings together winners of many local student competitions around the world with a focus on social good and, of course, Microsoft services like Azure. Last year’s winner was a smart prosthetic forearm that uses a camera in the palm to identify the object it is meant to grasp. (They were on hand today as well, with an improved prototype.)
The three finalists hailed from the U.K., India, and the U.S.; EasyGlucose was a one-person team from my alma mater UCLA.
EasyGlucose takes advantage of machine learning’s knack for spotting the signal in noisy data, in this case the tiny details of the eye’s iris. It turns out, as creator Brian Chiang explained in his presentation, that the iris’s “ridges, crypts, and furrows” hide tiny hints as to their owner’s blood glucose levels.
EasyGlucose presents at the Imagine Cup finals.
These features aren’t the kind of thing you can see with the naked eye (or rather, on the naked eye), but by clipping a macro lens onto a smartphone camera Chiang was able to get a clear enough image that his computer vision algorithms were able to analyze them.
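Chiang hasn’t published his model, but the overall shape of such a pipeline — turn an iris photo into a handful of texture features, then regress blood glucose from labeled examples — can be sketched with stand-in features and a toy nearest-neighbors regressor. All features and numbers below are invented for illustration; this is not EasyGlucose’s actual model:

```python
# Hypothetical outline of an image-regression pipeline of the sort
# EasyGlucose describes: extract texture features from an iris photo,
# then regress blood glucose from those features. The crude features
# and k-nearest-neighbors regressor here are stand-ins, not the real model.

def texture_features(pixels):
    """Crude texture summary of a grayscale image (list of pixel rows):
    mean brightness and mean absolute horizontal gradient ("roughness")."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    grads = [abs(row[i + 1] - row[i]) for row in pixels
             for i in range(len(row) - 1)]
    roughness = sum(grads) / len(grads)
    return (mean, roughness)

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, glucose_mg_dl) pairs.
    Predict glucose as the mean of the k nearest training examples."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda fv: dist(fv[0], query))[:k]
    return sum(g for _, g in nearest) / k

# Invented feature/label pairs: (brightness, roughness) -> glucose mg/dL.
train = [((10, 1.0), 90), ((12, 1.5), 95), ((30, 5.0), 140), ((32, 6.0), 150)]
print(knn_predict(train, (11, 1.2), k=2))  # 92.5
```

A production system would replace both pieces — learned convolutional features instead of hand-crafted ones, and a trained regressor instead of nearest neighbors — but the image-to-features-to-number structure is the same.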
The resulting blood glucose measurement is significantly better than any other non-invasive measure, and more than good enough to serve in place of the most common method used by diabetics: pricking themselves with a needle every couple of hours. Currently EasyGlucose gets within 7 percent of the pinprick method, well within what’s needed for “clinical accuracy,” and Chiang is working on closing that gap further. The low cost will no doubt be welcomed by the community as well: $10 for the lens adapter, and $20 per month for continued support via the app.
It’s not a home run, or not just yet: Naturally, a technology like this can’t go straight from the lab (or in this case the dorm) to global deployment. It needs FDA approval first, though it likely won’t have as protracted a review period as, say, a new cancer treatment or surgical device. In the meantime, EasyGlucose has a patent pending, so no one can eat its lunch while it navigates the red tape.
As the winner, Chiang gets $100,000, plus $50,000 in Azure credit, plus the coveted one-on-one mentoring session with Microsoft CEO Satya Nadella.
The other two Imagine Cup finalists also used computer vision (among other things) in service of social good.
Caeli is taking on the issue of air pollution by producing custom high-performance air filter masks intended for people with chronic respiratory conditions who have to live in polluted areas. This is a serious problem in many places that cheap or off-the-shelf filters can’t really solve.
It uses your phone’s front-facing camera to scan your face and pick the mask shape that makes the best seal against your face. What’s the point of a high-tech filter if the unwanted particles just creep in the sides?
Part of the mask is a custom-designed compact nebulizer for anyone who needs medication delivered in mist form, for example someone with asthma. The medicine is delivered automatically according to the dosage and schedule set in the app — which also tracks pollution levels in the area so the user can avoid hot zones.
Finderr is an interesting solution to the problem of visually impaired people being unable to find items they’ve left around their home. By using a custom camera and computer vision algorithm, the service watches the home and tracks the placement of everyday items: keys, bags, groceries, and so on. Just don’t lose your phone, since you’ll need that to find the other stuff.
You call up the app and tell it (by speaking) what you’re looking for; using the phone’s camera, the service determines your location relative to the item and gives you audio feedback that guides you to it in a sort of “getting warmer” style, along with a big visual indicator for those who can see it.
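Finderr’s implementation isn’t public, but the “getting warmer” feedback loop it describes is easy to sketch: compute the user’s distance to the tracked item and report whether the latest move reduced it. The thresholds and phrases below are invented for illustration:

```python
# Sketch of "getting warmer" audio guidance: map the user's distance to
# the target item into a feedback cue that updates as they move. The
# thresholds and phrases are invented, not Finderr's actual values.
import math

def guidance(user, item, previous_distance=None):
    """user/item: (x, y) positions in meters. Returns (cue, distance)."""
    distance = math.dist(user, item)
    if distance < 0.5:
        cue = "you found it"       # within arm's reach
    elif previous_distance is None:
        cue = "start moving"       # no baseline to compare against yet
    elif distance < previous_distance:
        cue = "warmer"             # last move closed the gap
    else:
        cue = "colder"             # last move opened the gap
    return cue, distance

cue, d = guidance((0, 0), (3, 4))                        # d == 5.0
print(guidance((1, 1), (3, 4), previous_distance=d)[0])  # warmer
```

In practice the positions would come from the ceiling camera’s localization rather than known coordinates, and the cues would be spoken aloud, but the feedback logic is the same.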
After their presentations, I asked the creators a few questions about upcoming challenges, since as is usual in the Imagine Cup, these companies are extremely early stage.
Right now EasyGlucose is working well but Chiang emphasized that the model still needs lots more data and testing across multiple demographics. It’s trained on 15,000 eye images but many more will be necessary to get the kind of data they’ll need to present to the FDA.
Finderr recognizes all the image categories in the widely used ImageNet database, but the team’s Ferdinand Loesch pointed out that new objects can be added easily with just 100 training images. As for the upfront cost, the U.K. offers a £500 grant to visually impaired people for this sort of equipment, and the team engineered the 360-degree ceiling-mounted camera to minimize the number needed to cover a home.
Caeli noted that the nebulizer, which really is a medical device in its own right, could be sold and promoted on its own, perhaps licensed to medical device manufacturers. There are other smart masks coming out, but he had a pretty low opinion of them (not strange in a competitor, but there isn’t some big market leader they need to dethrone). He also pointed out that in their target market of India (from which they plan to expand later), it isn’t as difficult to get insurance to cover this kind of device.
While these are early-stage companies, they aren’t hobbies — though admittedly many of their founders are working on them between classes. I wouldn’t be surprised to hear more about them and others from Imagine Cup pulling in funding and hiring in the next year.
Chanda Prescod-Weinstein is Assistant Professor of Physics and Astronomy and a Core Faculty Member in Women’s Studies at the University of New Hampshire. She is the lead “axion wrangler” and a social media team member for the NASA STROBE-X Probe Concept Study.
The first Black woman in history to hold a faculty position in theoretical cosmology, Prescod-Weinstein is also a Twitter activist who frequently goes viral, a prolific writer and editor in multiple genres and disciplines, and the author of a forthcoming New Scientist column and a 2021 book, The Disordered Cosmos: From Dark Matter to Black Lives Matter.
A millennial, she is at the vanguard of a new cohort of brilliant, young, tech-savvy academics who are conducting important research in science and technology while also gracefully shouldering the responsibility of helping transform the way many of us think about what it means to be a scientist or technologist and who we think of when we imagine those categories.
Why interview a theoretical cosmologist for this series on tech ethics? Because tech, like science, has much work to do in reckoning with issues of race, gender, inclusion, and intersectionality.
As I spoke with her recently, I pictured young women and men of color or other marginalized backgrounds, looking to find their own place in the extraordinary world that is our tech culture/industry (I call tech a religion to underscore its size and influence, but more on that in some other column) and wondering if a) they will be given a just and equitable opportunity to demonstrate their innate abilities; and b) if in their quest to “make it” in this world they will have to somehow ‘sell out.’
Prescod-Weinstein tells the story, below, of a profound ethical dilemma she faced at the very beginning of her career in science.
Prescod-Weinstein quoted Daniel Berrigan, about whom she first read in an Adrienne Rich poem about the “Catonsville Nine,” a group of anti-war activists who, in 1968, carried hundreds of draft files in wire baskets to the parking lot of the draft board in Catonsville, MD. Berrigan, his brother and fellow Catholic priest Philip, and their seven colleagues dumped the files out, doused them in homemade napalm, and set them on fire.
Berrigan later explained he was inspired to take such dramatic action, rather than merely talking about ethics, because he believed that mere talk would place him “in danger of verbalizing my moral impulses out of existence.”
In Prescod-Weinstein’s story and in her reference to Berrigan, we can find a parable about the need for inclusion and justice in today’s tech world. When we talk about tech ethics, after all, are we talking mainly about having yet more academic discussions about self-regulation or even incremental government policy changes? Or will we eventually need to grapple with burning issues to which we can only respond meaningfully with hard choices or dramatic actions?
What we all make of this, and of several other ethical questions raised in the conversation below, will determine so much about the future of ethics in tech.
Greg E.: You have been playing a prominent role in facilitating conversations about justice, inclusion, and intersectionality in the science world. I wanted to speak with you about your activism because it seems to me similar discussions are needed in the tech world, yet seem to be happening even less there. What do you think?
Indian edtech startup CollegeDekho, which helps students connect with prospective colleges and keep track of exams, has raised $8 million in a Series B round.
The new financing round for the four-year-old Gurgaon-based startup was led by its parent company Girnarsoft Education and London-based private equity investor Man Capital, which also participated in the startup’s Series A round last year.
Ruchir Arora, founder and CEO of CollegeDekho, told TechCrunch in an interview that the startup will use the capital to expand its presence in more schools and also begin connecting students with international educational institutions. The startup, which has raised $13 million to date, will also ramp up its research and development efforts.
CollegeDekho, Hindi for “search for college,” maintains a website that helps students identify the right career choices for them. The website has a chatbot that answers some of the questions students have while logging their responses and other site activity, such as the kinds of colleges they are searching for on the platform, their preferred location, and their budget.
Arora said the startup, which also employs about 3,000 call center representatives and counselors, builds profiles of students to make college recommendations. He said the site sees more than five million student sessions each month. Last year, more than 8,000 students used CollegeDekho to gain admission to a college.
Parents in India, a country of 1.3 billion people where literacy rates still lag, see education as a path to upward mobility for their children. Each year, six to seven million students enroll in college. But because of a range of factors, including cultural stigma, many students end up choosing the wrong path and thus don’t excel in college. Indeed, many students ultimately don’t pursue the subject they are best suited for, Arora said, and that’s where CollegeDekho aims to make an impact.
High school students in India often gravitate toward engineering or medical colleges, and as a result, each year the country produces many engineers and doctors who struggle for years to find a job. Arora said his startup looks at more than 2,000 career paths a student could pursue.
What works in Arora’s favor is that the country will continue to turn out millions of students each year who will be looking to go to college. It also helps that CollegeDekho is operationally profitable, Arora said, adding that it generates about $3.2 million in revenue a year. Any additional cash the startup raises will go toward its expansion, he said.
CollegeDekho charges students a nominal fee and also takes a cut when they join a college. More than 36,000 educational institutes are listed on the platform. The startup also works with more than 400 colleges to conduct exams for direct admission, where it earns a cut as well.
India’s education market, estimated to grow to $5.7 billion by next year, has emerged as a lucrative opportunity for startups and VCs alike. Bangalore-based Byju’s, which helps millions of students in India prepare for competitive exams, raised $540 million from Naspers and others late last year. Unacademy, which like Byju’s offers online tutoring to students, has raised more than $38.5 million to date.
A legion of other education startups is vying for the attention of students in the nation. Noida-based AskIITians, not far from the offices of CollegeDekho, aims to help school-going students prepare for medical and engineering exams. Extramarks, also based in Noida, operates in the same space as AskIITians. Reliance Industries, owned and controlled by India’s richest man, Mukesh Ambani, bought a 38.5 percent stake in the startup three years ago.
No one does brand synergy quite like Lego. The company has been one of the biggest Star Wars licensees over the years, and for the first time, it’s applying one of the most valuable pieces of IP to its own line of STEM kits, Lego Boost.
For this year’s Star Wars Day, the Danish company is announcing the arrival of the Lego Star Wars Boost Droid Commander set, which uses the underlying educational platform as a jumping-off point to build a trio of classic robots from the series.
Kids can use the kit to build R2-D2, Gonk and the Mouse Droid, commanding them on 40 different missions, while learning to build and code in the process. It looks like an effective way to disguise the whole learning bit behind one of history’s most beloved film series. It’s similar to the approach taken by littleBits’ Droid Inventor Kit a few years back — though Lego’s got a pretty remarkable track record of using Star Wars IP.
The set includes 1,177 pieces coupled with color and distance sensors and an interactive motor to get the robots moving. All of that works with a new Boost Star Wars app, which will be available for Android, iOS and Fire devices, featuring such missions as assisting an X-wing flight and seeking those sneaky rebels.
The system arrives on September 1, timed for the release of Star Wars: The Rise of Skywalker, the final film in the sequel trilogy.