NVIDIA is all about changing the game. Literally! Last year, NVIDIA’s head honcho Jensen Huang made it to TIME’s list of the world’s hundred most influential people. Last week, the bossman made headlines (as is his wont) after his keynote speech at GTC 2022 broke the internet. This week’s newsletter is all about NVIDIA’s flagship AI event.
The major announcements at GTC 2022 include the Hopper architecture and NVIDIA H100 Tensor Core GPU, AI models and libraries (Triton, Riva 2.0, NeMo Megatron 0.9, Merlin 1.0, Maxine), Grace Superchips, new OVX servers for Omniverse, faster networking, new DGX servers and Pods, enterprise AI software, the next-generation Drive Hyperion and Omniverse Cloud.
Below, we have put together a 9-minute highlight reel of Huang’s nearly two-hour keynote speech:
The week-long conference, held from March 21 to 25, featured more than 900 sessions focused on deep learning, Omniverse, data science, robotics, networking, and graphics. Leaders from organisations such as Flipkart, Amazon, Autodesk, Bloomberg, Cisco, DeepMind, Epic Games, Google Brain, Lockheed Martin, Microsoft, NASA, Pfizer, Sony, Stanford University, Disney and Zoom presented at the conference.
Read more stories on NVIDIA GTC 2022 below:
At the virtual press conference at GTC 2022, Huang expressed his interest in partnering with chipmakers in India. “Our three largest geographies are California, China and India, with India being slightly larger than China. I believe that AI is going to absolutely revolutionise the tech industry in India,” he said.
The attrition rate among India's data science professionals is skyrocketing. According to AIM Research, attrition among data science/analytics professionals stood at 28.1 percent in 2021, up from 16 percent the previous year.
The complete report on data science and analytics attrition by AIM Research can be accessed here.
People & Tech
Edward Hu, a deep learning PhD student at Mila advised by Yoshua Bengio, recently introduced µ-Parametrization along with Microsoft researchers Greg Yang and Jianfeng Gao. The technique enables maximal feature learning even in the infinite-width limit. The researchers further collaborated with OpenAI to demonstrate its practical advantages.
AIM caught up with Edward Hu and Greg Yang to learn more about their research. Read the story here.
Inside intra-city logistics marketplace Porter’s data science team
Startups, Apps & Tools
Suki.AI uses NLP and ML to deliver fast and accurate voice experiences.
Create your own manga characters with the help of a simple ML tool.
Spaces is a one-stop shop for developers looking for amazing machine learning apps.
Article in Focus
Generative models have the potential to produce photorealistic images that look similar to training data. So, do we still need datasets if we have good enough generative models? MIT researchers Ali Jahanian, Yonglong Tian, Xavier Puig, and Phillip Isola have investigated this question in the setting of learning general-purpose visual representations from a black-box generative model rather than directly from data. Read the complete story here.
Research & Papers
Researchers at the Technical University of Berlin have proposed a method for folding a deep neural network of arbitrary size into a single neuron with multiple time-delayed feedback loops. Check out the full story here.
Is there a quantum analog of the no-free-lunch (NFL) theorem? Find out here.
Social media dialogues
Event in Focus
Intel®, in collaboration with Analytics India Magazine, announced a masterclass on AI innovation with oneAPI on April 13, 2022, from 3:00 PM to 4:00 PM.
You can register for the event here.
Top News of the Week