
The best minds in machine learning predict where AI will go in 2020


AI is no longer poised to change the world someday; it's changing the world now. As we begin a new year and decade, VentureBeat turned to some of the sharpest minds in AI to revisit progress made in 2019 and look ahead to how machine learning will mature in 2020. We spoke with PyTorch creator Soumith Chintala, University of California, Berkeley professor Celeste Kidd, Google AI chief Jeff Dean, Nvidia director of machine learning research Anima Anandkumar, and IBM Research director Dario Gil.

Everyone has predictions for the year ahead, but these are people shaping the future today: individuals with authority in the AI community who treasure scientific research and whose track records have earned them credibility. While some predict advances in subfields like semi-supervised learning and neural symbolic approaches, virtually all the ML luminaries VentureBeat spoke with agree that great strides were made in Transformer-based natural language models in 2019, and they expect continued controversy over technologies such as facial recognition. They also want to see the AI field grow to value more than accuracy.

If you're interested in a look back, last year we spoke with people like Facebook AI chief scientist Yann LeCun, Landing AI founder Andrew Ng, and Accenture's global responsible AI lead, Rumman Chowdhury.

Soumith Chintala

Director, principal engineer and creator of PyTorch

Depending on how you measure it, PyTorch is the most popular machine learning framework in the world today. A derivative of the Torch open source framework introduced in 2002, PyTorch became available in 2015 and is growing steadily, with an expanding set of extensions and libraries.

This fall, Facebook released PyTorch 1.3 with quantization and TPU support, along with Captum, a deep learning interpretability tool, and PyTorch Mobile. There are also things like PyRobot and PyTorch Hub for sharing code and encouraging ML practitioners to embrace reproducibility.

In a conversation with VentureBeat this fall at PyTorch Dev Con, Chintala said he saw few significant advances in machine learning in 2019.

"I don't really think we had anything breakthrough ... since Transformer, basically. We had ConvNets hit prime time in 2012, and Transformer in 2017 or so. That's my personal opinion," he said.

He went on to call DeepMind's AlphaGo groundbreaking in its contributions to reinforcement learning, but said the results are hard to apply to practical tasks in the real world.

Chintala also believes the evolution of machine learning frameworks like PyTorch and Google's TensorFlow, today's favorites among ML practitioners, has changed how researchers explore ideas and do their jobs.

"That has been a breakthrough in the sense that it is making them move one or two orders of magnitude faster than they used to," he said.

This year, Google's and Facebook's open source frameworks introduced quantization to increase the speed of model computation. In the years ahead, Chintala expects "an explosion" in the importance and adoption of tools like PyTorch's JIT compiler and compilers for neural network hardware accelerators like Glow.

"With PyTorch and TensorFlow, you've seen the frameworks sort of converge. The reason quantization comes up, and a bunch of other lower-level efficiencies come up, is because the next war is compilers for the frameworks: XLA, TVM, PyTorch has Glow. A lot of innovation is waiting to happen," he said. "For the next few years, you're going to see ... how to quantize smarter, how to fuse better, how to use GPUs more efficiently, (and) how to automatically compile for new hardware."

Like most of the other industry leaders VentureBeat spoke with for this article, Chintala predicts that in 2020 the AI community will place more value on AI model performance beyond accuracy and begin paying attention to other important factors, like the amount of power it takes to create a model, how its output can be explained to humans, and how AI can better reflect the kind of society people want to build.

"If you think about the last five or six years, we have just focused on accuracy and raw numbers like, 'Is Nvidia's model more accurate? Is Facebook's model more accurate?'" he said. "I actually think 2020 will be the year when we start thinking (in a more complex fashion), where it doesn't matter if your model is 3% more accurate if ... it doesn't have a good interoperability mechanism (or meet other criteria)."

Celeste Kidd

Developmental psychologist at the University of California, Berkeley

Celeste Kidd is the director of Kidd Lab at the University of California, Berkeley, where she and her team explore how children learn. Their insights can help the creators of neural networks, who are trying to train models in ways not so different from raising a child.

"Human babies don't get labeled data sets, yet they manage just fine, and it's important for us to understand how that happens," she said.

One thing that surprised Kidd in 2019 was the number of neural network creators who casually belittle their own work, or that of other researchers, as incapable of doing something a baby can do.

When you average across babies' behavior, she said, there's evidence that they understand some things, but they're definitely not perfect learners, and that kind of talk paints an overly rosy picture of what babies can do.

"Human babies are great, but they make a lot of mistakes, and a lot of the comparisons I saw people casually making were idealizing baby behavior at the population level," she said. "I think there's likely to be a growing appreciation for the connection between what you currently know and what you want to understand next."

In AI, the phrase "black box" has been around for years. It's used to criticize neural networks' lack of explainability, but Kidd believes 2020 may spell the end of the perception that neural networks are uninterpretable.

"The black box argument is bogus ... brains are also black boxes, and we've made a lot of progress in understanding how brains work," she said.

In demystifying this perception of neural networks, Kidd points to the work of people like Aude Oliva, executive director of the MIT-IBM Watson AI Lab.

"We were talking about this, and I said something about the system being a black box, and she chided me reasonably, (saying) that of course they're not a black box. Of course you can dissect them, take them apart, see how they work, and run experiments on them, the same things we do to understand cognition," Kidd said.

Last month, Kidd delivered a keynote address at the Neural Information Processing Systems (NeurIPS) conference, the largest annual AI research conference in the world. Her talk focused on how human brains hold on to stubborn beliefs, attention systems, and Bayesian statistics.

The Goldilocks zone for delivering information, she said, lies between a person's prior interests and understanding and what surprises them. People tend to engage less with content that is too surprising.

She then said there's no such thing as a neutral tech platform and turned her attention to how the makers of content recommendation systems can manipulate people's beliefs. Systems built in pursuit of maximum engagement can have a significant impact on how people form beliefs and opinions.

Kidd ended her speech by talking about the misperception among men in machine learning that being alone with a female colleague will lead to sexual harassment allegations and end a man's career. That misperception, she said, can instead damage women's careers in the field.

For speaking out about sexual misconduct at the University of Rochester, Kidd was named one of Time's People of the Year in 2017, alongside the other women who helped bring about what we now call the #MeToo movement for the equal treatment of women. At the time, Kidd thought that speaking up would end her career.

In 2020, she wants to see greater awareness of the real-life implications of tech tools and technical decisions, and a rejection of the idea that toolmakers aren't responsible for what people do with them.

"I've heard a lot of people try to defend themselves by saying, 'Well, I'm not the arbiter of truth,'" she said. "I think there needs to be a growing awareness that that's a dishonest position."

"We really need, as a society and especially as the people working on these tools, to directly appreciate the responsibility that comes with them."

Jeff Dean

Google artificial intelligence chief

Dean has led Google AI for nearly two years, but he has been at Google for two decades and is the architect of many of the company's early search and distributed network algorithms, as well as an early member of Google Brain.

Dean spoke with VentureBeat last month at NeurIPS, where he delivered talks on machine learning for ASIC semiconductor design and on ways the AI community can address climate change, which he says is the most important issue of our time. In his climate change talk, Dean discussed the idea that AI can strive to become a zero-carbon industry and be used to help change human behavior.

He expects to see progress in 2020 in the fields of multimodal learning, which is AI that draws on multiple media for training, and multitask learning, which involves networks designed to complete multiple tasks at once.

Without question, one of the biggest machine learning trends of 2019 was the continued growth and proliferation of natural language models based on Transformer, the model Chintala referred to earlier as one of the biggest breakthroughs in AI in recent years. Google open-sourced BERT, a Transformer-based model, in 2018, and a number of the top-performing models released this year, according to the GLUE leaderboard, such as Google's XLNet, Microsoft's MT-DNN, and Facebook's RoBERTa, were based on Transformers. XLNet 2 is due out later this month, a company spokesperson told VentureBeat.

Dean pointed to the progress that has been made, saying, "... that whole research thread, I think, has been quite fruitful in terms of actually yielding machine learning models that (let us) do more sophisticated NLP tasks than we used to be able to do." But he added that there's still room to grow: "We'd still like to be able to do much more contextual kinds of models. Like right now, BERT and other models work well on hundreds of words, but not 10,000 words as context. So that's kind of (an) interesting direction."
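Dean's point about hundreds versus 10,000 words of context reflects, in part, self-attention's quadratic cost: each Transformer layer scores every token against every other token. A back-of-envelope sketch of just the attention-weight memory (the layer and head counts below are illustrative BERT-base-style assumptions, not measurements of any specific model):

```python
# Back-of-envelope: self-attention builds an n x n score matrix per head,
# so memory for attention weights grows quadratically with context length.
# 12 layers / 12 heads / 4-byte floats are illustrative assumptions.

def attention_matrix_bytes(seq_len, num_layers=12, num_heads=12, bytes_per_val=4):
    return num_layers * num_heads * seq_len * seq_len * bytes_per_val

short = attention_matrix_bytes(512)     # a typical BERT-style context window
long = attention_matrix_bytes(10_000)   # the 10,000-word context Dean mentions

print(f"512 tokens:    {short / 2**20:,.0f} MiB of attention weights")
print(f"10,000 tokens: {long / 2**30:,.1f} GiB of attention weights")
# Going from 512 to 10,000 tokens multiplies this cost by roughly 381x.
```

This quadratic blowup is why longer-context models remained, as Dean put it, an interesting direction rather than a solved problem.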

Dean said he wants to see less emphasis on slight state-of-the-art advances in favor of creating more robust models.

Google AI will also work to advance new initiatives like Everyday Robot, an internal project introduced in November 2019 to make robots that can accomplish common tasks in the home and workplace.

Anima Anandkumar

Nvidia Machine Learning Research Director

Anandkumar joined GPU maker Nvidia after her time as a principal scientist at AWS. At Nvidia, AI research continues across several areas, from federated learning for health care to autonomous driving, supercomputers, and graphics.

One area of emphasis for Nvidia and Anandkumar in 2019 was simulation frameworks for reinforcement learning, which are becoming more popular and mature.

In 2019, we saw the rise of Nvidia's Drive autonomous driving platform and Isaac robotics simulator, as well as models that generate synthetic data from simulations and generative adversarial networks, or GANs.

Last year also marked the rise of AI like StyleGAN, a network that can make people wonder whether they're looking at a computer-generated human face or a real person, and GauGAN, which can generate landscapes with a paintbrush. StyleGAN2 made its debut last month.

GANs are technologies that can blur the lines of reality, and Anandkumar believes they can help with major challenges the AI community is trying to tackle, like robotic hand grasping and autonomous driving. (Read more about the progress GANs made in 2019 in this report by VentureBeat AI staff writer Kyle Wiggers.)

Anandkumar also expects progress in the coming year from iterative algorithms, self-supervision, and self-training methods, the kinds of models that can improve by training themselves on unlabeled data.

"I think all kinds of different iterative algorithms are the future, because if you just do one feed-forward network, robustness is an issue. Whereas if you try to do many iterations and adapt them based on the kind of data or the accuracy requirements you want, there's a lot more chance of achieving that," she said.
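Self-training, in its simplest form, means fitting a model on the labeled data you have, using it to pseudo-label unlabeled examples, and refitting on the combined set. A toy sketch of that loop (the nearest-centroid "model" and all names here are illustrative, not any real library API):

```python
# Toy sketch of self-training: fit on labeled data, pseudo-label the
# unlabeled points, and refit on everything. The 1-D nearest-centroid
# classifier is deliberately trivial; the loop is the point.

def fit_centroids(points, labels):
    """Fit a nearest-centroid classifier: one mean per class."""
    centroids = {}
    for label in set(labels):
        members = [p for p, l in zip(points, labels) if l == label]
        centroids[label] = sum(members) / len(members)
    return centroids

def predict(centroids, x):
    """Assign x to the class with the nearest centroid."""
    return min(centroids, key=lambda label: abs(x - centroids[label]))

labeled = [0.0, 1.0, 9.0, 10.0]
labels = ["low", "low", "high", "high"]
unlabeled = [0.3, 0.7, 9.4, 9.8]

model = fit_centroids(labeled, labels)
# Pseudo-label the unlabeled data with the current model, then retrain
# on the combined set -- the essence of self-training.
pseudo = [predict(model, x) for x in unlabeled]
model = fit_centroids(labeled + unlabeled, labels + pseudo)
```

Real systems add a confidence threshold so only pseudo-labels the model is sure about feed back into training, which is where the robustness concerns Anandkumar raises come in.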

Anandkumar sees numerous challenges for the AI community in 2020, like the need to create models tailored to specific industries alongside domain experts. Policymakers, individuals, and the AI community will also need to grapple with issues of representation and the challenge of ensuring that the data sets used to train models account for different groups of people.

"I think (the issues with facial recognition are) so easy to grasp, but there are so many (other areas) where ... people don't realize there are privacy issues with the use of data," she said.

Facial recognition gets the most attention, Anandkumar said, because it's easy to understand how it can violate an individual's privacy, but there are a number of other ethical issues the AI community must confront in 2020.

"We will have increasing scrutiny over how data is gathered and how it is used. I think it's happening in Europe, but in the U.S. we will certainly see more of that, and for (the) right reasons, from groups like the National Transportation Safety Board (NTSB) and the Federal Transit Administration (FTA)," she said.

One of the great surprises of 2019, in Anandkumar's opinion, was the speed at which the text generation models progressed.

"2019 was the year of language models, right? Now, for the first time, we got to the point of more coherent text generation, and generation at the length of paragraphs, which wasn't possible before, (and) which is great," Anandkumar said.

In August 2019, Nvidia introduced its Megatron natural language model. With 8 billion parameters, Megatron is known as the world's largest Transformer-based AI model. Anandkumar said she was surprised by the way people began ascribing personalities or characters to models, and she expects to see more industry-specific text models.

"We are not yet at the stage of dialogue generation that's interactive, that can keep track (of context) and have natural conversations. So I think there will be more serious attempts made in 2020 in that direction," she said.

Developing frameworks for controlling text generation will be more challenging than, say, developing frameworks for images that can be trained to identify people or objects. Text generation models also come with challenges like, for example, defining what a fact is for a neural model.

Finally, Anandkumar said she was heartened to see Kidd's NeurIPS talk receive a standing ovation, as well as by signs of a growing sense of maturity and inclusion within the machine learning community.

"I feel like now is the watershed moment," she said. "In the beginning it's hard to even make small changes, and then the dam breaks. And I hope that's the case, because to me it feels that way, and I hope we can keep up the momentum, make even bigger structural changes, and help all the groups, everybody here, thrive."


Dario Gil

IBM Research Director

Gil leads a group of researchers actively advising the White House and businesses around the world. He believes major advances in 2019 included progress on generative models and the increasing quality with which plausible language can be generated.

He predicts continued progress toward training more efficiently with reduced-precision architectures. The development of more efficient AI models was an emphasis at NeurIPS, where IBM Research presented deep learning techniques using an 8-bit precision model.

"The way we train deep neural networks with existing hardware and GPU architectures is still so inefficient overall," he said. "So a really fundamental rethinking of that is very important. We have to improve the computational efficiency of AI so we can do more with it."

Gil cited research suggesting that demand for ML training doubles every three and a half months, much faster than the growth predicted by Moore's law.
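The gap Gil describes can be made concrete with a little doubling-time arithmetic (the 3.5-month figure comes from the research he cites; the 24-month doubling period for Moore's law is the conventional rule of thumb):

```python
# Compare one year of growth under two doubling periods: ML training
# demand (every 3.5 months) versus Moore's law (every ~24 months).

def growth_factor(months, doubling_period_months):
    """How much a quantity grows over `months` if it doubles every
    `doubling_period_months`."""
    return 2 ** (months / doubling_period_months)

ml_demand = growth_factor(12, 3.5)   # ML training demand after one year
moores_law = growth_factor(12, 24)   # Moore's-law growth after one year

print(f"ML training demand: ~{ml_demand:.1f}x per year")
print(f"Moore's law:        ~{moores_law:.2f}x per year")
# Demand grows roughly 10x per year, while transistor density grows ~1.4x,
# which is the inefficiency gap Gil argues must be closed algorithmically.
```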

Gil is also excited about how AI can help accelerate scientific discovery, but IBM Research will focus primarily on neural symbolic approaches to machine learning.

In 2020, Gil expects AI practitioners and researchers to develop a focus on metrics beyond accuracy to weigh the value of models deployed in production. Shifting the field toward building trusted systems, instead of prioritizing accuracy above all else, will be a central pillar of AI's continued adoption.

"Some members of the community may say, 'Don't worry about that; just deliver accuracy. It's okay, people will get used to the fact that the thing is a bit of a black box,' or they'll argue that humans sometimes don't generate explanations for some of the decisions we make either. I think it's very, very important that we concentrate the intellectual firepower of the community on doing much better than that. AI systems cannot be a black box in mission-critical applications," he said.

Gil believes in moving beyond the perception that AI is something only a select few machine learning wizards can do, so that AI is adopted by more people with data science and software engineering skills.

"If we leave it as some mythical realm, this field of AI that's only accessible to the select PhDs who work on this, it doesn't really contribute to its adoption," he said.

In the year ahead, Gil is particularly interested in neural symbolic AI. IBM will pursue neural symbolic approaches to power things like probabilistic programming, in which AI learns how to operate a program, and models that can share the reasoning behind their decisions.

"By (taking) this blended approach, a new contemporary approach of uniting learning and reasoning through these neural symbolic approaches, where the symbolic dimension is embedded in the learning of a program, we have shown you can learn with a fraction of the data that would otherwise be required," he said. "Because you learn a program, you end up getting something interpretable, and because you have something interpretable, you have something much more trusted."

Issues of fairness, data integrity, and data set selection will continue to draw a great deal of attention, as will "anything that has to do with biometrics," he said. Facial recognition gets a lot of attention, but it's just the beginning. Speech data will be viewed with increasing sensitivity, as will other forms of biometrics. He went on to cite Rafael Yuste, a Columbia professor who works in neurotechnology and is exploring ways to extract neural patterns from the visual cortex.

"I give this as an example that everything that has to do with people's identity and biometrics, and the advances AI makes in analyzing them, will remain front and center," Gil said.

Beyond neural symbolic and commonsense reasoning, a flagship initiative of the MIT-IBM Watson AI Lab, in 2020 Gil said IBM Research will also explore quantum computing for AI, as well as analog hardware for AI beyond reduced-precision architectures.

Final thoughts

Machine learning continues to shape business and society, and the researchers and experts VentureBeat spoke with see a series of trends on the horizon:

  • Advances in natural language models were a major story in 2019, as Transformers fueled great leaps forward. Look for more BERT variations and Transformer-based models in 2020.
  • The AI industry should seek ways to evaluate model outputs beyond accuracy.
  • Approaches like semi-supervised learning, neural symbolic approaches to machine learning, and subfields like multitask and multimodal learning may see progress in the year ahead.
  • Ethical challenges related to biometric data, like voice recordings, will likely remain controversial.
  • Compilers and approaches like quantization may grow in popularity as ways to optimize model performance in machine learning frameworks like PyTorch and TensorFlow.

Know of transformative technology VentureBeat should cover? Email editor Seth Colaner, senior AI staff writer Khari Johnson, or staff writer Kyle Wiggers.

