Bridging DeepMind research with Alphabet products

For today’s “Five minutes with” we caught up with Gemma Jennings, a product manager on the Applied team, who led a session on vision language models at the AI Summit – one of the world’s largest AI events for business.

At DeepMind…

I’m a part of the Applied team, which helps bring DeepMind technology to the outside world through Alphabet and Google products and solutions, as with WaveNet in Google Assistant, Maps, and Search. As a product manager, I act as a bridge between the two organisations, working very closely with both teams to understand the research and how people can use it. Ultimately, we want to be able to answer the question: How can we use this technology to improve the lives of people around the world?

I’m particularly excited about our portfolio of sustainability work. We’ve already helped reduce the amount of energy needed to cool Google’s data centres, but there’s much more we can do to have a bigger, transformative impact within sustainability.

Before DeepMind…

I worked at John Lewis Partnership, a UK department store that has a strong sense of purpose built into its DNA. I’ve always liked being part of a company with a sense of societal purpose, so DeepMind’s mission of solving intelligence to advance science and benefit humanity really resonated with me. I was intrigued to learn how that ethos would manifest within a research-led organisation – and within Google, one of the largest companies in the world. Adding this to my academic background in experimental psychology, neuroscience, and statistics, DeepMind ticked all the boxes.

The AI Summit…

Is my first in-person conference in almost three years, so I’m really keen to meet people in the same industry as me and to hear what other organisations are working on.

I’m looking forward to attending a few talks from the quantum computing track to learn more about the field. It has the potential to drive the next big paradigm shift in computing power, unlocking new use cases for applying AI in the world and allowing us to work on larger, more complex problems.

My work involves a lot of deep learning methods, and it’s always exciting to hear about the different ways people are using this technology. At the moment, these types of models require training on large amounts of data – which can be costly, time-consuming, and resource-intensive given the amount of compute needed. So where do we go from here? And what does the future of deep learning look like? These are the types of questions I’m looking to answer.

I presented…

Image Recognition Using Deep Neural Networks, our recently published research on vision language models (VLMs). In my presentation, I discussed recent advances in fusing large language models (LLMs) with powerful visual representations to push forward the state of the art in image recognition.

This fascinating research has so many potential uses in the real world. It could, one day, act as an assistant to support classroom and informal learning in schools, or help people with blindness or low vision see the world around them, transforming their day-to-day lives.

I want people to leave the session…

With a better understanding of what happens after a research breakthrough is announced. There’s so much amazing research being done, but we need to think about what comes next: what global problems could we help solve? And how can we use our research to create products and services that have a purpose?

The future is bright and I’m excited to discover new ways of applying our groundbreaking research to help benefit millions of people around the world.
