Posted:


Entering text on mobile devices is still considered inconvenient by many; touchscreen keyboards, although much improved over the years, require a lot of attention to hit the right buttons. Voice input is an option, but there are situations where it is not feasible, such as in a noisy environment or during a meeting. Handwriting offers a natural and intuitive way to enter text that complements typing and speech input methods. However, until recently, enabling this functionality presented significant challenges for many languages.

Today we launched Google Handwriting Input, which lets users handwrite text on their Android mobile device as an additional input method for any Android app. Google Handwriting Input supports 82 languages in 20 distinct scripts, and works with both printed and cursive writing, with or without a stylus. Beyond text input, it also provides a fun way to enter hundreds of emojis by drawing them (simply press and hold the ‘enter’ button to switch modes). Google Handwriting Input works with or without an Internet connection.
By combining large-scale language modeling, robust multi-language OCR, large-scale neural networks, and approximate nearest neighbor search for character classification, Google Handwriting Input supports languages that can be challenging to type on a virtual keyboard. For example, keyboards for ideographic languages (such as Chinese) are often based on a particular dialect of the language, making them hard to use for people who do not know that dialect. Additionally, keyboards for complex script languages (like many South Asian languages) are less standardized and may be unfamiliar. Even for languages where virtual keyboards are more widely used (like English or Spanish), some users find that handwriting is more intuitive, faster, and generally more comfortable.
Writing 'Hello' in Chinese, German, and Tamil.
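
To make the character-classification step concrete, here is a minimal, purely illustrative sketch: each drawn symbol is mapped to an embedding vector (in the real system, by a large neural network), and that embedding is matched against per-character prototype vectors. The toy label set, the 128-dimensional embedding, and the brute-force cosine lookup below are all assumptions for illustration; at production scale the lookup is replaced by approximate nearest neighbor search, and a language model then rescores the candidates in context.

```python
import numpy as np

# Hypothetical setup: one prototype embedding per character class.  In the
# real system the query embedding comes from a large neural network and the
# exact brute-force search below is replaced by approximate nearest neighbor
# search so it scales to tens of thousands of characters.
EMBED_DIM = 128
characters = ["你", "好", "H", "a", "ல்"]                 # toy label set
prototypes = np.random.randn(len(characters), EMBED_DIM)
prototypes /= np.linalg.norm(prototypes, axis=1, keepdims=True)

def classify(stroke_embedding, top_k=3):
    """Return the top_k candidate characters for one drawn symbol."""
    q = stroke_embedding / np.linalg.norm(stroke_embedding)
    scores = prototypes @ q                                # cosine similarity
    best = np.argsort(-scores)[:top_k]
    return [(characters[i], float(scores[i])) for i in best]

print(classify(np.random.randn(EMBED_DIM)))
```
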
Google Handwriting Input is the result of many years of research at Google. Initially, cloud-based handwriting recognition supported the Translate apps on Android and iOS, Mobile Search, and Google Input Tools (in Chrome, ChromeOS, Gmail and Docs, translate.google.com, and the Docs symbol picker). However, other products required recognizers to run directly on an Android device without an Internet connection. So we worked to make recognition models smaller and faster for use in Android handwriting input methods for Simplified and Traditional Chinese, Cantonese, and Hindi, as well as multi-language support in Gesture Search. Google Handwriting Input combines these efforts, allowing recognition both on-device and in the cloud (by tapping on the cloud icon) in any Android app.

You can install Google Handwriting Input from the Play Store here. More information and FAQs can be found here.

Posted:


Convolutional Neural Networks (CNNs) have recently shown rapid progress in advancing the state of the art in detecting and classifying objects in static images, automatically learning complex image features without the need for manual feature engineering. But what if one wanted not only to identify objects in static images, but also to analyze what a video is about? After all, a video isn’t much more than a string of static images linked together in time.

As it turns out, video analysis provides even more information for the object detection and recognition task performed by CNNs: a temporal component through which motion and other information can also be used to improve classification. However, analyzing entire videos is challenging from a modeling perspective, because one must model variable-length videos with a fixed number of parameters - and doing so is computationally very intensive.

In Beyond Short Snippets: Deep Networks for Video Classification, to be presented at the 2015 Computer Vision and Pattern Recognition conference (CVPR 2015), we[1] evaluated two approaches - feature pooling networks and recurrent neural networks (RNNs) - capable of modeling variable-length videos with a fixed number of parameters while maintaining a low computational footprint. In doing so, we were able to show not only that learning a high-level global description of the video’s temporal evolution is very important for accurate video classification, but also that our best networks exhibited significant performance improvements over previously published results on the Sports 1 million dataset (Sports-1M).

In previous work, we employed 3D convolutions (meaning convolutions over time and space) over short video clips - typically just a few seconds - to learn motion features from raw frames implicitly and then aggregate predictions at the video level. For the purpose of video classification, these low-level motion features only marginally outperformed models in which no motion was modeled.

To understand why, consider the following two images which are very similar visually but obtain drastically different scores from a CNN model trained on static images:
Slight differences in object poses/context can change the predicted class/confidence of CNNs trained on static images.
Since each individual video frame forms only a small part of the video’s story, static frames and short video snippets (2-3 seconds) use incomplete information and can easily confuse subtle, fine-grained distinctions between classes (e.g., Tae Kwon Do vs. Systema) or rely on portions of the video irrelevant to the action of interest.

To get around this frame-by-frame confusion, we used feature pooling networks that independently process each frame and then pool/aggregate the frame-level features over the entire video at various stages. Another approach we took was to use a recurrent neural network (RNN) built from Long Short-Term Memory (LSTM) units instead of feature pooling, allowing the network itself to decide which parts of the video are important for classification. By sharing parameters through time, both the feature pooling and RNN architectures are able to maintain a constant number of parameters while capturing a global description of the video’s temporal evolution.
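
As a rough illustration of these two aggregation strategies (a sketch only; the networks in the paper are deeper and explore several pooling placements), the snippet below either max-pools per-frame CNN features over time or feeds them through an LSTM, with the same parameters reused for every frame:

```python
import torch
import torch.nn as nn

class VideoClassifier(nn.Module):
    """Aggregate per-frame CNN features over an entire video.

    mode='pool' max-pools the features over time; mode='lstm' lets an LSTM
    decide which parts of the video matter.  The feature size, hidden size,
    and the 487 Sports-1M classes are illustrative choices.
    """
    def __init__(self, feat_dim=2048, hidden=512, num_classes=487, mode='pool'):
        super().__init__()
        self.mode = mode
        if mode == 'lstm':
            self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
            self.fc = nn.Linear(hidden, num_classes)
        else:
            self.fc = nn.Linear(feat_dim, num_classes)

    def forward(self, frame_feats):                  # (batch, time, feat_dim)
        if self.mode == 'lstm':
            out, _ = self.lstm(frame_feats)
            video_feat = out[:, -1]                  # state after the last frame
        else:
            video_feat, _ = frame_feats.max(dim=1)   # max-pool over time
        return self.fc(video_feat)

# The number of parameters is independent of video length.
feats = torch.randn(2, 120, 2048)                    # 2 videos, 120 frames each
logits = VideoClassifier(mode='lstm')(feats)         # (2, 487) class scores
```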

In order to feed the two aggregation approaches, we first train a “pixel-based” CNN model on the raw pixels of the video frames. We process videos for this pixel-based CNN at one frame per second to reduce computational complexity. Of course, at this frame rate implicit motion information is lost.

To compensate, we incorporate explicit motion information in the form of optical flow - the apparent motion of objects across a camera's viewfinder due to the motion of the objects or the motion of the camera. We compute optical flow images over adjacent frames to learn an additional “optical flow” CNN model.
Left: Image used for the pixel-based CNN; Right: Dense optical flow image used for optical flow CNN
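
The flow estimator used to produce these images is not specified here, but as one concrete possibility, the sketch below computes dense optical flow between two adjacent frames with OpenCV's Farneback method and renders it with a common hue-for-direction, brightness-for-magnitude encoding:

```python
import cv2
import numpy as np

def flow_image(prev_bgr, next_bgr):
    """Dense optical flow between two adjacent frames (illustrative only)."""
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_bgr, cv2.COLOR_BGR2GRAY)
    # flow[..., 0] and flow[..., 1] are per-pixel x and y displacements.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    hsv = np.zeros_like(prev_bgr)
    hsv[..., 0] = ang * 180 / np.pi / 2                              # direction -> hue
    hsv[..., 1] = 255
    hsv[..., 2] = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX)  # speed -> brightness
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
```
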
The pixel-based and optical flow based CNN model outputs are provided as inputs to both the RNN and pooling approaches described earlier. These two approaches then separately aggregate the frame-level predictions from each CNN model input, and average the results. This allows our video-level prediction to take advantage of both image information and motion information to accurately label videos of similar activities even when the visual content of those videos varies greatly.
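
A late-fusion step along these lines might look like the short sketch below, where the video-level scores from the two streams are simply averaged (equal weighting is an assumption, not necessarily what the paper uses):

```python
import torch

def fuse_streams(pixel_logits, flow_logits):
    """Average video-level predictions from the pixel and optical-flow streams.

    Both inputs are (num_videos, num_classes) scores produced by the
    aggregation networks described above.
    """
    return (torch.softmax(pixel_logits, dim=1) +
            torch.softmax(flow_logits, dim=1)) / 2
```
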
Badminton (top 25 videos according to the max-pooling model). Our methods accurately label all 25 videos as badminton despite the variety of scenes in the various videos because they use the entire video’s context for prediction.
We conclude by observing that although very different in concept, the max-pooling and the recurrent neural network methods perform similarly when using both images and optical flow. Currently, these two architectures are the top performers on the Sports-1M dataset. The main difference between the two was that the RNN approach was more robust when using optical flow alone on this dataset. Check out a short video showing some example outputs from the deep convolutional networks presented in our paper.


[1] Research carried out in collaboration with University of Maryland, College Park PhD student Joe Yue-Hei Ng and University of Texas at Austin PhD student Matthew Hausknecht, as part of a Google Software Engineering Internship.

Posted:


Over the past couple of years, Google’s Course Builder has been used to create and deliver hundreds of online courses on a variety of subjects (from sustainable energy to comic books), making learning more scalable and accessible through open source technology. With the help of Course Builder, over a million students of all ages have learned something new.

Today, we’re increasing our commitment to Course Builder by bringing rich, new functionality to the platform with a new release. Of course, we will also continue to work with edX and others to contribute to the entire ecosystem.

This new version enables instructors and students to understand prerequisites and skills explicitly, introduces several improvements to the instructor experience, and even allows you to export data to Google BigQuery for in-depth analysis.
  • Drag and drop, simplified tabs, and student feedback
We’ve made major enhancements to the instructor interface, such as simplifying the tabs and clarifying which part of the page you’re editing, so you can spend more time teaching and less time configuring. You can also structure your course on the fly by dragging and dropping elements directly in the outline.

Additionally, we’ve added the option to include a feedback box at the bottom of each lesson, making it easy for your students to tell you their thoughts (though we can't promise you'll always enjoy reading them).
  • Skill Mapping
You can now define prerequisites and skills learned for each lesson. For instance, in a course about arithmetic, addition might be a prerequisite for the lesson on multiplying numbers, while multiplication is a skill learned. Once an instructor has defined the skill relationships, they will have a consolidated view of all their skills and the lessons they appear in, such as this list for Power Searching with Google:
Instructors can then enable a skills widget that appears at the top of each lesson and lets students see exactly what they should know before and after completing a lesson. Below are the prerequisites and goals for the Thinking More Deeply About Your Search lesson. A student can easily see what they should know beforehand and which lessons to explore next to learn more.
Skill maps help students better understand which content is right for them, and they lay the groundwork for our future forays into adaptive and personalized learning. Learn more about Course Builder skill maps in this video.
  • Analytics through BigQuery
One of the core tenets of Course Builder is that quality online learning requires a feedback loop between instructor and student, which is why we’ve always had a focus on providing rich analytical information about a course. But no matter how complete, sometimes the built-in reports just aren’t enough. So Course Builder now includes a pipeline to Google BigQuery, allowing course owners to issue super-fast queries in a SQL-like syntax using the processing power of Google’s infrastructure. This allows you to slice and dice the data in an infinite number of ways, giving you just the information you need to help your students and optimize your course. Watch these videos on configuring and sending data.
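
As an illustration of the kind of query this enables, here is a sketch using the Python BigQuery client; the project, dataset, table, and column names are hypothetical stand-ins for whatever your Course Builder export produces:

```python
from google.cloud import bigquery

# Hypothetical names: substitute your own project and the dataset/table
# created by your Course Builder -> BigQuery data pipeline.
client = bigquery.Client(project="my-course-project")

sql = """
    SELECT unit_id, COUNT(DISTINCT student_id) AS students_completed
    FROM `my-course-project.course_builder_export.lesson_completions`
    GROUP BY unit_id
    ORDER BY students_completed DESC
"""

for row in client.query(sql).result():
    print(row.unit_id, row.students_completed)
```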

To get started with your own course, follow these simple instructions. Please let us know how you use these new features and what you’d like to see in Course Builder next. Need some inspiration? Check out our list of courses (and tell us when you launch yours).

Keep on learning!

Posted:


One of Google's goals is to surface successful strategies that support the expansion of high-quality Computer Science (CS) programs at the undergraduate level. Innovation in teaching and technology, along with better engagement of women and underrepresented minority students, is necessary to create inclusive, sustainable, and scalable educational programs.

To address issues arising from the dramatic increase in undergraduate CS enrollments, we recently launched the Computer Science Capacity Awards program. For this three-year program, select educational institutions were invited to contribute proposals for innovative, inclusive, and sustainable approaches to address current scaling issues in university CS educational programs.

Today, after an extensive proposal review process, we are pleased to announce the recipients of the Capacity Awards program:

Carnegie Mellon University - Professor Jacobo Carrasquel
Alternate Instructional Model for Introductory Computer Science Classes
CMU will develop a new instructional model consisting of two optional mini lectures per week given by the instructor, and problem-solving sessions with flexible group meetings that are coordinated by undergraduate and graduate teaching assistants.

Duke University - Professor Jeffrey Forbes
North Carolina State University - Professor Kristy Boyer
University of North Carolina - Professor Ketan Mayer-Patel
Research Triangle Peer Teaching Fellows: Scalable Evidence-Based Peer Teaching for Improving CS Capacity and Diversity
The project hopes to increase CS retention and diversity by developing a highly scalable, effective, evidence-based peer training program across three universities in the North Carolina Research Triangle.

Mount Holyoke College - Professor Heather Pon-Barry
MaGE (Megas and Gigas Educate): Growing Computer Science Capacity at Mount Holyoke College
Mount Holyoke’s MaGE program includes a plan to grow enrollment in introductory CS courses, particularly for women and other underrepresented groups. The program also includes a plan of action for CS students to educate, mentor, and support others in inclusive ways.

George Mason University - Professor Jeff Offutt
SPARC: Self-PAced Learning increases Retention and Capacity
George Mason University wants to replace the traditional course model for CS-1 and CS-2 with an innovative teaching model of self-paced introductory programming courses. Students will periodically demonstrate competency with practical skills demonstrations similar to those used in martial arts.

Rutgers University - Professor Andrew Tjang
Increasing the Scalability and Diversity in the Face of Large Growth in Computer Science Enrollment
Rutger’s program addresses scalability issues with technology tools, as well as collaborative spaces. It also emphasizes outreach to Rutgers’ women’s college and includes original research on success in CS programs to create new courses that cater to the changing environment.

University of California, Berkeley - Professor John DeNero
Scaling Computer Science through Targeted Engagement
Berkeley’s program plans to increase Software Engineering and UI Design enrollment by 500 total students/year, as well as increase the number of women and underrepresented minority CS majors by a factor of three.

Each of the selected schools brings a unique and innovative approach to addressing current scaling issues, and we are excited to collaborate in developing concrete strategies for sustainable and inclusive educational programs. Stay tuned over the coming year as we report on the recipients' progress and share results with the broader CS education community.

Posted:


Last year, Google and Tsinghua University hosted the 2014 APAC MOOC Focused Faculty Workshop, an event designed to share, brainstorm and generate ideas aimed at fostering MOOC innovation. As a result of the ideas generated at the workshop, we solicited proposals from the attendees for research collaborations that would advance important topics in MOOC development.

After expert reviews and committee discussions, we are pleased to announce the following recipients of the MOOC Focused Research Awards. These awards cover research exploring new interactions to enhance the learning experience, personalized learning, online community building, interoperability of online learning platforms, and education accessibility:

  • “MOOC Visual Analytics” - Michael Ginda, Indiana University, United States
  • “Improvement of students’ interaction in MOOCs using participative networks” - Pedro A. Pernías Peco, Universidad de Alicante, Spain
  • “Automated Analysis of MOOC Discussion Content to Support Personalised Learning” - Katrina Falkner, The University of Adelaide, Australia
  • “Extending the Offline Capability of Spoken Tutorial Methodology” - Kannan Moudgalya, Indian Institute of Technology Bombay, India
  • “Launching the Pan Pacific ISTP (Information Science and Technology Program) through MOOCs” - Yasushi Kodama, Hosei University, Japan
  • “Fostering Engagement and Social Learning with Incentive Schemes and Gamification Elements in MOOCs” - Thomas Schildhauer, Alexander von Humboldt Institute for Internet and Society, Germany
  • “Reusability Measurement and Social Community Analysis from MOOC Content Users” - Timothy K. Shih, National Central University, Taiwan

In order to further support these projects and foster collaboration, we have begun pairing the award recipients with Googlers pursuing online education research as well as product development teams.

Google is committed to supporting innovation in online learning at scale, and we congratulate the recipients of the MOOC Focused Research Awards. It is our belief that these collaborations will further develop the potential of online education, and we are very pleased to work with these researchers to jointly push the frontier of MOOCs.

Posted:


Computer scientists have dreamt of large-scale quantum computation since at least 1994 -- the hope is that quantum computers will be able to process certain calculations much more quickly than any classical computer, helping to solve problems ranging from complicated physics and chemistry simulations to optimization and machine learning tasks.

One of the primary challenges is that quantum memory elements (“qubits”) have always been too prone to errors. They’re fragile and easily disturbed -- any fluctuation or noise from their environment can introduce memory errors, rendering the computations useless. As it turns out, getting even just a small number of qubits together to repeatedly perform the required quantum logic operations and still be nearly error-free is just plain hard. But our team has been developing the quantum logic operations and qubit architectures to do just that.

In our paper “State preservation by repetitive error detection in a superconducting quantum circuit”, published in the journal Nature, we describe a superconducting quantum circuit with nine qubits where, for the first time, the qubits are able to detect and effectively protect each other from bit errors. This quantum error correction (QEC) can overcome memory errors by applying a carefully choreographed series of logic operations on the qubits to detect where errors have occurred.
Photograph of the device containing nine quantum bits (qubits). Each qubit interacts with its neighbors to protect them from error.

So how does QEC work? In a classical computer, we can monitor bits directly to detect errors. However, qubits are much more fickle -- measuring a qubit directly will collapse entanglement and superposition states, removing the quantum elements that make it useful for computation.

To get around this, we introduce additional ‘measurement’ qubits, and perform a series of quantum logic operations that look at the 'measurement' and 'data' qubits in combination. By looking at the state of these pairwise combinations (using quantum XOR gates), and performing some careful cross-checking, we can pull out just enough information to detect errors without altering the information in any individual qubit.
The basics of error correction. ‘Measurement’ qubits can detect errors on ‘data’ qubits through the use of quantum XOR gates.
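
To make the parity-check idea concrete, here is a small simulator sketch in Cirq of a three-qubit bit-flip repetition code: two ‘measurement’ qubits each XOR a pair of neighboring ‘data’ qubits, so the measured syndrome pinpoints an injected error without measuring the data qubits themselves. This is an illustrative toy, not the nine-qubit superconducting device or the repetitive error-detection sequence from the paper.

```python
import cirq

# Three 'data' qubits interleaved with two 'measurement' (ancilla) qubits,
# mirroring the alternating layout described above.
d0, a0, d1, a1, d2 = cirq.LineQubit.range(5)

circuit = cirq.Circuit(
    # Encode a logical |1> across the three data qubits.
    cirq.X(d0), cirq.CNOT(d0, d1), cirq.CNOT(d0, d2),
    # Inject a single bit-flip error on the middle data qubit.
    cirq.X(d1),
    # Parity checks: each measurement qubit XORs its two data-qubit neighbors.
    cirq.CNOT(d0, a0), cirq.CNOT(d1, a0),
    cirq.CNOT(d1, a1), cirq.CNOT(d2, a1),
    cirq.measure(a0, a1, key="syndrome"),
)

result = cirq.Simulator().run(circuit, repetitions=100)
# Both checks fire (syndrome 0b11 = 3), identifying the middle data qubit as
# flipped -- without ever measuring the data qubits directly.
print(result.histogram(key="syndrome"))
```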

We’ve also shown that storing information in five qubits works better than just storing it in one, and that with nine qubits the error correction works even better. That’s a key result -- it shows that the quantum logic operations are trustworthy enough that by adding more qubits, we can detect more complex errors that otherwise may cause algorithmic failure.

While the basic physical processes behind quantum error correction are feasible, many challenges remain, such as improving the logic operations behind error correction and testing protection from phase-flip errors. We’re excited to tackle these challenges on the way towards making real computations possible.

Posted:


Discovering new treatments for human diseases is an immensely complicated challenge; even after extensive research to develop a biological understanding of a disease, an effective therapeutic that can improve quality of life must still be found. This process often takes years of research, requiring the creation and testing of millions of drug-like compounds in an effort to find just a few viable drug treatment candidates. These high-throughput screens are often automated in sophisticated labs and are expensive to perform.

Recently, deep learning with neural networks has been applied in virtual drug screening [1,2,3], which attempts to replace or augment the high-throughput screening process with the use of computational methods in order to improve its speed and success rate [4]. Traditionally, virtual drug screening has used only the experimental data from the particular disease being studied. However, as the volume of experimental drug screening data across many diseases continues to grow, several research groups have demonstrated that data from multiple diseases can be leveraged with multitask neural networks to improve the virtual screening effectiveness.
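
As a sketch of what such a multitask network looks like, the snippet below uses a shared trunk that learns a common representation of compounds while each disease/assay gets its own small output head. The layer sizes, the 1024-bit fingerprint input, and the binary active/inactive heads are illustrative assumptions, not the architecture from the paper described below:

```python
import torch
import torch.nn as nn

class MultitaskScreeningNet(nn.Module):
    """Shared trunk with one output head per screening task (assay)."""
    def __init__(self, n_tasks, in_dim=1024, hidden=1200):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # One small head per disease/assay; all heads share the trunk.
        self.heads = nn.ModuleList(nn.Linear(hidden, 2) for _ in range(n_tasks))

    def forward(self, fingerprints):
        shared = self.trunk(fingerprints)           # representation shared by all tasks
        return [head(shared) for head in self.heads]

# Hundreds of biological processes can share the same learned representation.
model = MultitaskScreeningNet(n_tasks=200)
logits_per_task = model(torch.randn(8, 1024))       # batch of 8 compounds
```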

In collaboration with the Pande Lab at Stanford University, we’ve released a paper titled "Massively Multitask Networks for Drug Discovery", investigating how data from a variety of sources can be used to improve the accuracy of determining which chemical compounds would be effective drug treatments for a variety of diseases. In particular, we carefully quantified how the amount and diversity of screening data from a variety of diseases with very different biological processes can be used to improve the virtual drug screening predictions.

Using our large-scale neural network training system, we trained at a scale 18x larger than previous work with a total of 37.8M data points across more than 200 distinct biological processes. Because of our large scale, we were able to carefully probe the sensitivity of these models to a variety of changes in model structure and input data. In the paper, we examine not just the performance of the model but why it performs well and what we can expect for similar models in the future. The data in the paper represents more than 50M total CPU hours.
This graph shows a measure of prediction accuracy (ROC AUC is the area under the receiver operating characteristic curve) for virtual screening on a fixed set of 10 biological processes as more datasets are added.

One encouraging conclusion from this work is that our models are able to utilize data from many different experiments to increase prediction accuracy across many diseases. To our knowledge, this is the first time the effect of adding additional data has been quantified in this domain, and our results suggest that even more data could improve performance even further.

Machine learning at scale has significant potential to accelerate drug discovery and improve human health. We look forward to continued improvement in virtual drug screening and its increasing impact in the discovery process for future drugs.

Thank you to our other collaborators David Konerding (Google), Steven Kearnes (Stanford), and Vijay Pande (Stanford).

References:

1. Unterthiner, T., Mayr, A., Klambauer, G., Steijaert, M., Wegner, J. K., Ceulemans, H., and Hochreiter, S. Deep Learning as an Opportunity in Virtual Screening. Deep Learning and Representation Learning Workshop, NIPS 2014.

2. Dahl, G. E., Jaitly, N., and Salakhutdinov, R. Multi-task Neural Networks for QSAR Predictions. arXiv preprint arXiv:1406.1231, 2014.

3. Ma, J., Sheridan, R. P., Liaw, A., Dahl, G., and Svetnik, V. Deep Neural Nets as a Method for Quantitative Structure-Activity Relationships. Journal of Chemical Information and Modeling, 2015.

4. Ripphausen, P., Nisius, B., Peltason, L., and Bajorath, J. Quo Vadis, Virtual Screening? A Comprehensive Survey of Prospective Applications. Journal of Medicinal Chemistry, 2010, 53 (24), 8461-8467.