Generative AI is a type of artificial intelligence technology that can produce various types of content, including text, imagery, audio, and synthetic data. It describes algorithms (such as ChatGPT) that can be used to create new content, including audio, code, images, text, simulations, and videos. Recent breakthroughs in the field have the potential to drastically change the way we approach content creation.
This course covers how to define generative AI, explain how generative AI works, describe generative AI model types, and describe generative AI applications.
Two very common questions asked are: What is artificial intelligence, and what is the difference between AI and machine learning?
So one way to think about it is that AI is a discipline, like how physics is a discipline of science. AI is a branch of computer science that deals with the creation of intelligent agents: systems that can reason, learn, and act autonomously. Essentially, AI has to do with the theory and methods to build machines that think and act like humans.
Machine learning is a subfield of AI. It is a program or system that trains a model from input data. The trained model can make useful predictions from new (never-before-seen) data drawn from the same distribution as the data used to train the model. This means that machine learning gives the computer the ability to learn without explicit programming.
The two most common classes of machine learning models are unsupervised and supervised ML models. The key difference between the two is that supervised models use labeled data. Labeled data is data that comes with a tag, like a name, a type, or a number. Unlabeled data is data that comes with no tag.
In supervised learning, the model learns from past examples to predict future values. Unsupervised problems are all about discovery: looking at the raw data and seeing if it naturally falls into groups.
In supervised learning, testing data values ("x") are input into the model. The model outputs a prediction and compares it to the actual values that were used to train the model. If the predicted values and the actual values are far apart, that gap is called the error. The model tries to reduce this error until the predicted and actual values are closer together. This is a classic optimization problem.
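To make that optimization idea concrete, here is a minimal sketch (my own illustration, not something from the course) of a supervised model reducing its error with gradient descent; the data and learning rate are made up.

```python
# Minimal sketch: supervised learning as an optimization problem.
# We fit y = w * x by repeatedly nudging w to reduce the squared error
# between predicted and actual values. All numbers are illustrative.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])      # input features
y = np.array([2.1, 3.9, 6.2, 8.1])      # labeled (actual) values, roughly y = 2x

w = 0.0                                  # model parameter, starts uninformed
learning_rate = 0.01

for step in range(500):
    predictions = w * x                  # model output
    error = predictions - y              # gap between predicted and actual values
    gradient = 2 * np.mean(error * x)    # direction that increases the error
    w -= learning_rate * gradient        # move the opposite way to reduce it

print(round(w, 2))                       # close to 2.0 once the error is small
```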
While machine learning is a broad field that encompasses many different techniques, deep learning is a type of machine learning that uses artificial neural networks, allowing it to process more complex patterns than traditional machine learning. Artificial neural networks are inspired by the human brain. Like your brain, they are made up of many interconnected nodes, or neurons, that can learn to perform tasks by processing data and making predictions.
Deep learning models typically have many layers of neurons, which allows them to learn more complex patterns than traditional machine learning models. Neural networks can use both labeled and unlabeled data. This is called semi-supervised learning. In semi-supervised learning, a neural network is trained on a small amount of labeled data and a large amount of unlabeled data. The labeled data helps the neural network to learn the basic concepts of the tasks, while the unlabeled data helps the neural network to generalize to new examples.
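As a tiny sketch of the semi-supervised idea, here is one way to do it with scikit-learn's SelfTrainingClassifier (my choice of library, not something the course prescribes); unlabeled examples are marked with -1 and get pseudo-labeled from the few labeled ones.

```python
# Semi-supervised sketch: a few labeled points plus many unlabeled ones.
# SelfTrainingClassifier treats samples labeled -1 as unlabeled, pseudo-labels
# them with the base classifier, and retrains on the enlarged labeled set.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (50, 2)),    # cluster A
               rng.normal(+2, 1, (50, 2))])   # cluster B
y_true = np.array([0] * 50 + [1] * 50)

y_partial = np.full(100, -1)                  # -1 means "unlabeled"
y_partial[:5] = 0                             # only 5 labeled examples per class
y_partial[50:55] = 1

model = SelfTrainingClassifier(LogisticRegression())
model.fit(X, y_partial)
print(model.score(X, y_true))                 # accuracy against the true labels
```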
Generative AI is a subset of deep learning, which means it uses artificial neural networks and can process both labeled and unlabeled data using supervised, unsupervised, and semi-supervised methods. Large language models are also a subset of deep learning. Deep learning models (or machine learning models in general) can be divided into two types: generative and discriminative.
A discriminative model is a type of model that is used to classify or predict labels for data points. Discriminative models are typically trained on a dataset of labeled data points, and they learn the relationship between the features of the data points and the labels. Once a discriminative model is trained, it can be used to predict the label for new data points. A generative model, by contrast, generates new data instances based on a learned probability distribution of existing data. In short, generative models generate new data instances, while discriminative models discriminate between different kinds of data instances.
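A rough way to see the difference in code (a sketch with made-up 2-D data, not an example from the course): the discriminative model only learns to assign labels to points, while the generative model learns the data distribution and can sample brand-new points from it.

```python
# Discriminative vs. generative, on the same toy 2-D data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
cats = rng.normal(-2, 1, (100, 2))           # pretend these are "cat" features
dogs = rng.normal(+2, 1, (100, 2))           # and these are "dog" features
X = np.vstack([cats, dogs])
labels = np.array([0] * 100 + [1] * 100)

# Discriminative: learns to predict a label for a new data point.
clf = LogisticRegression().fit(X, labels)
print(clf.predict([[-1.5, -2.0]]))           # -> [0]  ("cat")

# Generative: learns the data distribution and can generate new instances.
gen = GaussianMixture(n_components=2, random_state=1).fit(X)
new_points, _ = gen.sample(3)                # new samples that were never in X
print(new_points)
```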
The traditional supervised ML process takes training code and labeled data to build a model. Depending on the use case or problem, the model can give you a prediction, classify something, or cluster something.
The generative AI process can take training code, labeled data, and unlabeled data of all data types and build a "foundation model." The foundation model can then generate new content: text, code, images, audio, video, and more. In traditional programming, we used to have to hard-code the rules for distinguishing a cat: type: animal, legs: 4, ears: 2, fur: yes, likes: yarn and catnip.
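For contrast, here is roughly what that hand-written rule approach looks like in code; the attributes mirror the list above, and the function itself is purely illustrative.

```python
# Traditional programming: the rules for "is this a cat?" are written by hand.
def is_cat(animal: dict) -> bool:
    return (
        animal.get("type") == "animal"
        and animal.get("legs") == 4
        and animal.get("ears") == 2
        and animal.get("fur") is True
        and "yarn" in animal.get("likes", [])
    )

print(is_cat({"type": "animal", "legs": 4, "ears": 2,
              "fur": True, "likes": ["yarn", "catnip"]}))   # True
```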
In the wave of neural networks, we could give the network pictures of cats and dogs and ask, "Is this a cat?", and it would predict "cat" or "not a cat."
What's really cool is that in the generative wave, we - as users - can generate our own content - whether it be text, images, audio, video, or more.
For example, models like Gemini (Google's multimodal AI model) or LaMDA (Language Model for Dialogue Applications) ingest very large amounts of data from multiple sources across the internet and build foundation language models we can use simply by asking a question, whether by typing it into a prompt or by speaking it aloud. So, when you ask it "what's a cat," it can give you everything it has learned about a cat.
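In code, asking such a model a question can be as simple as sending it a prompt. The sketch below assumes the google-generativeai Python SDK and an API key; the model name is just a placeholder for whichever Gemini model you have access to.

```python
# Sketch: asking a foundation language model a question via a prompt.
# Assumes the google-generativeai SDK is installed and an API key is available;
# the model name is illustrative and may differ in your environment.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")            # placeholder key
model = genai.GenerativeModel("gemini-1.5-flash")  # assumed model name

response = model.generate_content("What's a cat?")
print(response.text)                               # everything it has learned about cats
```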
Generative AI is a type of artificial intelligence that creates new content based on what it has learned from existing content. The process of learning from existing content is called training and results in the creation of a statistical model. When given a prompt, generative AI uses this statistical model to predict what an expected response might be, and this generates new content. It learns the underlying structure of the data and can then generate new samples that are similar to the data it was trained on. A generative language model can take what it has learned from the examples it's been shown and create something entirely new based on that information. That's why we use the word "generative."
But large language models, which generate novel combinations of text in the form of natural-sounding language, are only one type of generative AI. A generative image model takes an image as input and can output text, another image, or video: with text output you get visual question answering, with image output you get image completion, and with video output you get animation. A generative language model takes text as input and can output more text, an image, audio, or decisions. Based on what it has learned from its training data, it offers predictions of how to complete a sentence; given some text, it can predict what comes next. Thus, generative language models are pattern-matching systems.
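A toy illustration of that pattern-matching idea (nothing like how Gemini actually works internally, just the "predict what comes next" intuition): a bigram model that counts which word tends to follow which, then completes a prompt one word at a time.

```python
# Toy "predict what comes next" model: count word pairs in some training text,
# then complete a prompt by repeatedly picking the most frequent follower.
from collections import defaultdict, Counter

training_text = "the cat sat on the mat and the cat purred on the mat"
words = training_text.split()

followers = defaultdict(Counter)
for current_word, next_word in zip(words, words[1:]):
    followers[current_word][next_word] += 1

def complete(prompt: str, length: int = 3) -> str:
    out = prompt.split()
    for _ in range(length):
        last = out[-1]
        if last not in followers:          # no pattern learned for this word
            break
        out.append(followers[last].most_common(1)[0][0])
    return " ".join(out)

print(complete("the cat"))   # -> "the cat sat on the"
```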
Gemini, for example, is trained on a massive amount of text data and is able to communicate and generate human-like text in response to a wide range of prompts and questions.
The power of generative AI comes from the use of transformers. Transformers produced the 2018 revolution in natural language processing. At a high level, a transformer model consists of an encoder and a decoder. The encoder encodes the input sequence and passes it to the decoder, which learns how to decode the representation for a relevant task.
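To make that concrete, here is a minimal sketch using PyTorch's built-in nn.Transformer module; the dimensions and random tensors are purely illustrative, not anything from the course.

```python
# Minimal encoder-decoder sketch with PyTorch's nn.Transformer.
# The "embeddings" here are random tensors standing in for real token embeddings.
import torch
import torch.nn as nn

d_model = 64                              # size of each token embedding (illustrative)
transformer = nn.Transformer(
    d_model=d_model, nhead=4,
    num_encoder_layers=2, num_decoder_layers=2,
    batch_first=True,
)

# One example: a 10-token source sequence and a 7-token partial target sequence.
src = torch.randn(1, 10, d_model)         # encoder input
tgt = torch.randn(1, 7, d_model)          # decoder input

# The encoder encodes `src`; the decoder attends to that encoding while
# processing `tgt`, producing one output vector per target position.
out = transformer(src, tgt)
print(out.shape)                          # torch.Size([1, 7, 64])
```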
Sometimes, though, transformers run into issues. Hallucinations are words or phrases generated by the model that are often nonsensical or grammatically incorrect. Hallucinations can be caused by a number of factors, like when the model: is not trained on enough data, is trained on noisy or dirty data, is not given enough context, or is not given enough constraints. Hallucinations can be a problem for transformers because they can make the output text difficult to understand. They can also make the model more likely to generate incorrect or misleading information.
Foundation models have the potential to revolutionize many industries, including healthcare, finance, and customer service. They can even be used to detect fraud and provide personalized customer support. If you're looking for foundation models, Vertex AI offers a Model Garden that includes Foundation Models. The language Foundation Models include chat, text, and code.
The vision foundation models include Stable Diffusion, which has been shown to be effective at generating high-quality images from text descriptions. Say you have a use case where you need to gather sentiments about how your customers feel about your product or service: you can use the sentiment analysis task model. The same goes for vision tasks: if you need to perform occupancy analytics, there is a task-specific model for that use case. So those are some examples of foundation models we can use, but can generative AI help with code for your apps?
For example, you can work in a free, browser-based Jupyter Notebook and simply export the Python code to Google's Colab.
So, to summarize, Gemini code generation can help you: debug your lines of source code, explain your code to you line by line, craft SQL queries for your database, translate code from one language to another, and generate documentation and tutorials for source code.
Vertex AI is particularly helpful for all of you who don't have much coding experience. You can build generative AI search and conversations for customers and employees with Vertex AI Agent Builder (formerly Vertex AI Search and Conversation), with little or no coding and no prior machine learning experience.
Vertex AI can help you create your own: chatbots, digital assistants, custom search engines, knowledge bases, training applications, and more.
Gemini is a multimodal AI model. Unlike traditional language models, it's not limited to understanding text alone. It can analyze images, understand the nuances of audio, and even interpret programming code. This allows Gemini to perform complex tasks that were previously impossible for AI. Due to its advanced architecture, Gemini is incredibly adaptable and scalable, making it suitable for diverse applications.
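As a sketch of what "multimodal" means in practice, the same SDK used earlier can accept an image alongside text in a single prompt; the model name and the image file below are placeholders, not something specified in the course.

```python
# Multimodal sketch: one prompt mixes an image and text.
# Assumes the google-generativeai SDK, an API key, and a local image file;
# the model name is a placeholder for a multimodal Gemini model.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")            # placeholder key
model = genai.GenerativeModel("gemini-1.5-flash")  # assumed model name

photo = Image.open("cat_photo.jpg")                # hypothetical local image
response = model.generate_content([photo, "What animal is in this picture?"])
print(response.text)
```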
Special note: You can get a free certificate from Google through this course, Introduction to Generative AI by Google.
References: Introduction to Generative AI by Google, jupyter.org