AI Security & Governance Certification

Technology Underlying Generative AI

Two key technologies underlie the generative AI revolution: (a) transformers and (b) diffusion models.

Transformers are typically applied to text data but can also be used for images and audio. They are the basis for all modern Large Language Models (LLMs) because they allow neural networks to learn patterns in very large volumes of (text) training data. The result is the remarkable capabilities observed in text generation models.
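To make the idea concrete, the sketch below shows the scaled dot-product self-attention operation at the heart of transformers, written with NumPy. It is a minimal illustration, not a production implementation: the weight matrices here are random placeholders for parameters that a real LLM learns from massive training corpora, and the function names are our own.

```python
# Minimal sketch of scaled dot-product self-attention (illustrative only).
# In a trained transformer, W_q, W_k, and W_v are learned from data; here
# they are random placeholders so the example runs standalone.
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, W_q, W_k, W_v):
    # X: (seq_len, d_model) token embeddings; W_*: projection matrices.
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # pairwise token-to-token affinities
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V                   # weighted mix of value vectors

# Toy usage with a 4-token sequence of 8-dimensional embeddings.
rng = np.random.default_rng(0)
seq_len, d_model, d_k = 4, 8, 8
X = rng.normal(size=(seq_len, d_model))
W_q, W_k, W_v = (rng.normal(size=(d_model, d_k)) for _ in range(3))
print(self_attention(X, W_q, W_k, W_v).shape)  # (4, 8)
```

Because every token attends to every other token in the sequence, this mechanism is what lets transformers capture long-range patterns in very large text datasets.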

Diffusion models have overtaken Generative Adversarial Networks (GANs) as the neural models of choice for image generation. Unlike the error-prone generation process of GANs, diffusion models construct an image iteratively through a gradual denoising process. The result is a myriad of new AI-based tools for generating and even editing images.
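The sketch below illustrates the reverse (denoising) loop of a DDPM-style diffusion model: starting from pure Gaussian noise, the sample is refined step by step. The `predict_noise` function is a stand-in for a trained noise-prediction network, and the schedule values are illustrative assumptions rather than any specific library's API.

```python
# Minimal sketch of a DDPM-style denoising loop (illustrative only).
import numpy as np

T = 50                                   # number of denoising steps
betas = np.linspace(1e-4, 0.02, T)       # simple linear noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def predict_noise(x, t):
    # Placeholder for a trained noise-prediction network eps_theta(x, t).
    return np.zeros_like(x)

def sample(shape=(8, 8), rng=np.random.default_rng(0)):
    x = rng.normal(size=shape)           # start from pure Gaussian noise
    for t in reversed(range(T)):
        eps = predict_noise(x, t)
        # Remove the noise component predicted for this timestep.
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        x = (x - coef * eps) / np.sqrt(alphas[t])
        if t > 0:                        # inject a small amount of fresh noise
            x = x + np.sqrt(betas[t]) * rng.normal(size=shape)
    return x                             # gradually refined sample

print(sample().shape)  # (8, 8)
```

In a real image model, the noise predictor is a large neural network and the sample is a full-resolution image tensor; the iterative refinement shown here is what makes the process more stable than the adversarial training used by GANs.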

According to McKinsey, generative AI alone has the potential to contribute between $2.6 trillion and $4.4 trillion annually to business revenues. More than 75% of this value is expected to come from the integration of generative AI into customer operations, marketing and sales, software engineering, and research and development.
