The Belamy: Favorite Deep Learning Techniques of Facebook & Deepmind 🏗🚦🎟
And other top AI stories of the week
Last week, we announced our first in-person conference in two years. Machine Learning Developers Summit, now in its 4th year, will be held in hybrid mode (in-person & virtual) on 19-20th Jan.
All talks will be delivered in person, with in-person attendance capped at 200. Coincidentally, our last in-person conference was MLDS 2020, and we are returning with MLDS 2022.
Date: 19-20th Jan 2022
Venue: Hotel Radisson Blu, Bangalore
We look forward to once again engaging with the ML developer community. Register here>>
1
Facebook Loves Self-Supervised Learning
Facebook’s chief AI scientist Yann LeCun’s influence seems to have rubbed off on the team, which has taken a path less travelled: self-supervision. Today, Facebook has become synonymous with self-supervised learning. What began as a research strategy for Facebook AI teams has, over the years, turned into an area of scientific breakthrough.
For instance, its pre-trained language model XLM, first introduced in 2019, is accelerating important applications at Facebook today, such as proactive hate speech detection. Facebook AI Research has made significant strides in self-supervised learning over the last two years, with techniques including MoCo, Textless NLP, DINO, 3DETR and DepthContrast.
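Contrastive methods like MoCo and DINO share a core idea: pull together the embeddings of two augmented views of the same input and push apart everything else in the batch. Below is a rough NumPy sketch of that idea using the generic InfoNCE/NT-Xent objective — an illustration of the family of techniques, not Facebook's actual code (batch size, temperature, and the toy embeddings are all arbitrary choices here):

```python
import numpy as np

def nt_xent_loss(z_a, z_b, temperature=0.5):
    """InfoNCE/NT-Xent contrastive loss over two batches of embeddings.

    z_a, z_b: (N, D) arrays of L2-normalised embeddings of two augmented
    "views" of the same N inputs. Matching rows are positive pairs; every
    other row in the combined batch acts as a negative.
    """
    n = z_a.shape[0]
    z = np.concatenate([z_a, z_b], axis=0)       # (2N, D)
    sim = z @ z.T / temperature                  # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)               # exclude self-similarity
    # Row i's positive partner sits at row i+N (and vice versa).
    targets = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), targets].mean()

rng = np.random.default_rng(0)
# Two nearly identical "views" of the same batch should give a low loss.
z1 = rng.normal(size=(8, 16))
z1 /= np.linalg.norm(z1, axis=1, keepdims=True)
z2 = z1 + 0.01 * rng.normal(size=z1.shape)
z2 /= np.linalg.norm(z2, axis=1, keepdims=True)
print(nt_xent_loss(z1, z2))
```

In a real system the embeddings come from an encoder network trained to minimise this loss; the sketch only shows the objective itself.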
2
Zerodha Vs Groww
Today, Zerodha competes with a slew of investing platforms, including Groww, ETMoney, Upstox, Paytm Money, ProStocks, 5paisa, etc. But the question is, how is Zerodha managing to stay ahead of the game, despite the growing competition in the space?
In the coming years, Groww could emerge as a direct contender to Zerodha. On this, team Zerodha said that its technology and product offerings are objectively exhaustive and ahead of other players in the market.
3
The King of Reinforcement Learning
DeepMind is perhaps among the few players to have mastered the art of reinforcement learning, so much so that it has successfully applied it across many domains, in systems such as AlphaZero.
DeepMind’s AlphaFold, its latest proprietary algorithm, predicts protein structures far faster than traditional experimental methods. Meanwhile, DeepMind’s MuZero matches the performance of AlphaZero on Go, chess and shogi, while also mastering a range of visually complex Atari games.
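AlphaZero and MuZero layer deep networks and search on top of the same core loop that drives all reinforcement learning: act, observe a reward, update value estimates. That loop can be sketched with plain tabular Q-learning on a toy corridor environment — a generic illustration of the technique, not DeepMind's code (the environment, learning rate, and episode count are all arbitrary here):

```python
import random

# Tabular Q-learning on a toy 1-D corridor: states 0..4, reward at state 4.
N_STATES, GOAL = 5, 4
ACTIONS = [+1, -1]                       # move right / left
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1    # learning rate, discount, exploration

random.seed(0)
for episode in range(200):
    s = 0
    while s != GOAL:
        # Epsilon-greedy: mostly exploit the current Q-table, sometimes explore.
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0
        # Bellman update: nudge Q(s,a) toward r + gamma * max_a' Q(s',a').
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# After training, the greedy policy should point right in every state.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)}
print(policy)
```

AlphaZero replaces the table with a neural network and the epsilon-greedy step with Monte Carlo tree search, but the value-update principle is the same.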
4
Deloitte & Accenture Bumper Results
Ireland-based Accenture and UK-based Deloitte, two big names in IT services, have each reported FY21 revenues exceeding $50 billion. Accenture announced revenue of $50.5 billion, an increase of 14 per cent in US dollars over fiscal 2020; Deloitte reached $50.2 billion, an increase of 5.5 per cent.
The tremendous revenue figures reported by both Accenture and Deloitte indicate how well the consulting and outsourcing business is thriving despite the COVID-19 crisis.
5
Video of the week
This week's episode is a weekly update from the world of data science: new launches, the latest research, cyber threats and other events from the last seven days.
6
Hands-on Guides for ML Developers
+ A Guide to Hidden Markov Model and its Applications in NLP
+ How Machine Learning can be used with Blockchain Technology?
+ An Introductory Guide to Few-Shot Learning for Beginners
+ A Tutorial on Survival Analysis for Beginners
+ A Guide to Stochastic Process and Its Applications in Machine Learning
7
PEOPLE & STARTUPS
+ NVIDIA’s Vishal Dhupar talks about Omniverse, Leveraging India’s talent and upcoming GTC
+ Digital Foundations Are Becoming The Bedrock Of All Organisations: Akhilesh Ayer, WNS
+ All About Digital Twins: Interview With Vinay Jammu, GE Digital
+ Transition From UPSC To Data Science: A Personal Account
+ To Be A Data Scientist, You Must Be A Software Engineer: Antrixsh Gupta, Danalitic
+ How Bengaluru-based KreditBee Leverages AI And ML To Democratise Credit
8
Bigger than GPT-3
Earlier this week, in partnership with Microsoft, NVIDIA introduced one of the largest transformer language models to date, the Megatron-Turing Natural Language Generation (MT-NLG) model with 530 billion parameters. The model was trained using Microsoft's DeepSpeed library and NVIDIA's Megatron-LM framework.
Interestingly, MT-NLG has 3x the parameters of the previous largest model, GPT-3 (175 billion parameters), and far more than Turing NLG (17 billion parameters) and Megatron-LM (8 billion parameters).
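A quick sanity check of the "3x" claim, using the parameter counts as reported:

```python
# Parameter counts in billions, as stated in the announcement coverage.
params = {"MT-NLG": 530, "GPT-3": 175, "Turing NLG": 17, "Megatron-LM": 8}
ratio = params["MT-NLG"] / params["GPT-3"]
print(f"MT-NLG vs GPT-3: {ratio:.1f}x")
```

530 / 175 ≈ 3.0, so MT-NLG is indeed roughly triple the size of GPT-3.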
9
Our Upcoming Events
Masterclass | Performance Boosting ETL Workloads Using RAPIDS On Spark 3.0 | 20th Oct 2021 | Register>>
Virtual Conference | AWS Data & Analytics Conclave | 21st Oct 2021 | Register>>
Masterclass | Accelerate Data Engineering | 27th Oct 2021 | Register>>
Bangalore | Machine Learning Developers Summit 2022 | 19-20th Jan 2022 | Register>>
10
BOTTOM OF THE NEWS
Here's everything that happened last week.
+ IBM Aims to Skill More Than 30 Million People Globally By 2030
+ InMobi Signs Definitive Agreement To Acquire Appsumer Analytics, An Insights Platform For Marketers
+ OpenCV 4.5.4 Released, Look For Updated Features And Fixes
+ Accenture to Acquire Bangalore Based Boutique Analytics Firm BRIDGEi2i