The Rise Of SLMs In 2026 | Future Of SLMs In AI Deployment | SLMs vs LLMs Explained | Simplilearn
Chapters
Host introduces the session, greets attendees from around the world, and outlines the webinar structure.
A practical deep dive into small language models (SLMs) vs. LLMs in 2026, with hybrid deployment strategies, real-world use cases, and a clear framework to decide the right tool for the job.
Summary
Simplilearn hosts Kennedy, an AI/ML engineer, to unpack how SLMs are changing AI deployment in 2026. The discussion contrasts SLMs with traditional LLMs, explains on-device and edge use cases, and highlights why enterprises are adopting hybrid architectures to balance cost, latency, and data privacy. Kennedy walks through practical deployment considerations, including when to fine-tune, quantize, or distill models, and how to route tasks between SLMs and LLMs for optimal performance. The session emphasizes ROI, latency, and privacy as core drivers for choosing SLMs, and it provides concrete examples like HR chatbots and on-device assistants. The presenters also touch on real-world uptake, noting that most production tasks don't need GPT-4-level power, just a precise, fast model that excels at a single job. The talk closes with career implications, noting how roles like ML engineer are evolving toward prompt engineering, RAG, and system design for hybrid AI architectures. Finally, Simplilearn pitches a Michigan Engineering–IBM–Simplilearn program designed to bridge the skills gap with hands-on projects and industry exposure. The overall message: learn to pick the right model for the right task, build robust hybrid pipelines, and stay ahead with practical, project-based training.
Key Takeaways
- Hybrid AI is often the optimal path: use SLMs for defined tasks and reserve LLMs for broader reasoning to balance cost and latency.
- On-device SLMs address data privacy and reduce cloud dependency by keeping sensitive data in enterprise infrastructure.
- Fine-tuning, quantization, and distillation are key techniques to tailor foundation models to task-specific needs.
- ROI and latency considerations drive SLM adoption: smaller hardware, faster responses, and lower operational costs matter for production apps.
- Model evaluation should be task-centric, not one-size-fits-all; the right tool depends on data sensitivity, required reasoning, and throughput needs.
- Edge and hybrid architectures are practical for enterprises looking to scale AI without sacrificing control.
- Career paths are shifting toward prompt engineering, system design, and MLOps for hybrid AI setups.
Who Is This For?
This is essential watching for AI/ML engineers, data scientists, and IT leaders who are trying to decide when to deploy SLMs vs. LLMs and how to structure hybrid AI pipelines in production. It’s also valuable for professionals considering the Michigan-IBM-Simplilearn program to upskill with hands-on, real-world projects.
Notable Quotes
"A very precise, fast, cheap model that does one thing brilliantly"
—Kennedy's core takeaway: most production AI tasks don't need GPT-4-level power.
"A hybrid approach, where you use large language models for some use cases and small language models for others"
—On why hybrids are practical in production.
"The data privacy concerns are addressed by running the model on-prem within your own internal infrastructure"
—Data privacy benefits of SLMs in enterprise settings.
"You are not training a model from scratch, but you're taking an existing model and augmenting it"
—Shifts in roles and workflows for ML engineers in the era of SLMs/LLMs.
"Think from a system design perspective about how to operationalize these different large language models or a hybrid"
—Emphasizing architectural thinking for production systems.
Questions This Video Answers
- How do you decide between using a small language model vs. a large language model for a specific business task?
- What is a hybrid AI architecture and how can it reduce cost and latency in production?
- Can on-device SLMs protect sensitive data better than cloud-based LLMs?
- What skills should I develop to stay relevant with SLMs and LLMs in 2026?
- What is RAG and how does it fit into SLM/LLM deployments?
Small Language Models (SLMs), Large Language Models (LLMs), Hybrid AI, MLOps, RAG (Retrieval-Augmented Generation), On-device AI, Edge AI, Model quantization/distillation, Prompt engineering, AI governance and data privacy
Full Transcript
Hey, I see a few messages that have come in. Let me just read out a few introductions in a while. Hi Daniel, thank you for joining us from Kenya. Hi, there's someone from Virginia but we I don't have your name. Um Swati, thank you for joining us. Hi, there's a side. Um someone Anthony I think from Chicago. Hi um Fred from Philippines. Hi Swati. Thank you for joining us. Hi Akillesh from Mumbai. Arti from Okay, we don't have that detail. Hi Roti from India. Hi Radika from India. Abilage from India. Dr. Thomas from Nigeria. Great amazing amazing.
There are a lot more introductions. Sorry, I'm unable to read every single one, but it's super great to have you all here. We are also live on LinkedIn and YouTube, and we have more participants watching us from all over the world today, so I'm really excited to get started. Let me quickly introduce myself. My name is Anana, and I'll be hosting the session on behalf of Simplilearn. This is "The Rise of SLMs: What ML and Data Professionals Need to Know." We're here to talk about something that's quietly becoming one of the most important conversations in the AI space right now.
Small language models, or SLMs. And trust me, by the end of today's session you'll walk away with a very different way of thinking about how AI actually gets deployed in the real world. Before we kick off, a quick note: this webinar is brought to you by Simplilearn in partnership with one of the world's leading engineering institutions, Michigan Engineering Professional Education from the University of Michigan. We have a fantastic expert with us today, and we're going to have a really rich conversation. So let's get started.
Okay, just a few quick ground rules before we dive in. If you have any questions, and we really hope you do, please drop them in the Q&A box. That's the best way to make sure we catch them, because as you can see, a lot of pings keep coming in, so we might miss an important question in the chat box. You can use the chat box, but please keep it for general conversation relevant to the topic, and please do not share any personal or irrelevant details.
The chat will be monitored very closely, and anything inappropriate will be deleted. Very importantly, if you would like an attendance certificate for today or the recording of the webinar, please make sure to enter your full name in the poll that we'll launch at the end of the session. We will not be able to take any manual submissions for the recording or the certificate, so please do stay tuned with us till the end. There's a question about how long the session will be: it will be a one-hour session, and we hope to wrap up within time.
And like I said, the recording will be available for all of you who participate in the poll and give us your full name. Okay, so a quick poll: is there anyone in the audience who has attended a Simplilearn webinar before? I think I've seen some of you in our previous sessions as well. But if it's your first time here, please just type "first time" and let us know in the chat box. Okay, I'll just wait a moment for more responses to come in. Okay. Wow. Great.
First time. First time. First time. A lot of first-timers here. Great. Okay, so for those of you who are joining us for the first time, let me give you a very quick sense of who we are. Simplilearn is a global digital skills platform with a very simple mission: to transform lives by helping people build the skills that really matter in today's world. We have helped advance the careers of over 15 million people across 150-plus countries, and we work with 50-plus global partners to make that happen.
We consistently rank among the top training companies in the world, and we're proud of the fact that our learners and graduates rate us really highly. So you are in great hands today as well. A big part of what makes Simplilearn's programs stand out is who we work with. Our partner ecosystem spans some of the most respected universities and companies in the world, across the US, India, and globally. These aren't just logos on a slide; they're active collaborations that help shape our curriculum, provide industry exposure, and lend real credibility to the credentials our learners earn from our programs.
We'll come back to this context when we talk about the program later. For now, let's focus on why you're all here today. Before we jump into the content, I just want to give you a quick snapshot of what makes Simplilearn's programs different, because it's relevant to what we'll share later: an 80% graduation rate and a 4.8-out-of-5 rating. These are numbers that reflect a learning experience that's very practical, live, and deeply relevant to what the industry demands. Our programs are co-created with industry partners, like I mentioned, and taught by practitioners, not just academicians.
So what you would be building is real-world skills, not just theoretical knowledge. Okay, so moving on, here's a quick look at what we're going to cover today. We're going to move through this as a conversation, not a lecture, so it should be very engaging for everyone here. We'll start by asking a simple but maybe uncomfortable question: is the AI your organization is using actually costing you more than it should? Then we'll get into what SLMs actually are, why enterprises are shifting toward them, how they're being used in the real world, how to choose between an SLM and an LLM, and what all of this means for your career, your role, and your future in AI.
So the last part of today is particularly important for anyone thinking about where the AI space is heading and what skills you need to stay relevant in the coming years, so please do stick around for that. Okay, let me now bring in our expert for today. I'm really looking forward to hosting this session with him. Kennedy is a senior AI/ML engineer with a very rich background spanning data science, machine learning, and data engineering. He's worked with some of the most well-known names in the tech space, including Amazon Web Services, and currently works at Cloudpace, where he continues to build and deploy scalable machine learning systems.
His expertise spans generative AI, MLOps, deep learning, reinforcement learning, and more, and he brings real production experience to everything he talks about. So the insights you're going to hear today will go well beyond theory, because they come directly from someone who has actually built these systems. Kennedy, a very warm welcome to you. Great to have you with us. Over to you for a quick hello before we kick off, and then we'll get into the content. Yeah, thank you, and thank you for the good introduction, and welcome everyone to this session.
I was looking at the chat here, and I do see we've got people joining from different parts of the world. I think in some places it's even night, so thank you for finding the time to join, and I look forward to being part of the conversation we're going to have here today regarding small language models. As for my background, I think a lot has already been mentioned. I currently have more than 8 years of experience in the industry around machine learning, artificial intelligence, deep learning, and MLOps, and I'll be happy to discuss that more as we continue with the webinar.
Back to you. Perfect. Thank you so much, Kennedy. It's amazing to have you here. Okay, so we'll get right into the first segment. Before we talk about the world of SLMs, I think it's important to set the stage for this topic. A lot of organizations these days are using AI very prominently; ChatGPT and various cloud-based models have become common and part of our everyday lives. On the surface it seems like everything is working: everyone is in AI adoption mode, and it's interesting and exciting to try new models, new tools, and so on.
But I want to ask you: is there actually a problem here that people are not talking about enough, particularly when it comes to the reality of what AI is costing us? Yeah, good question. So first of all, when it comes to deploying a large language model application in the real world, in your production environment, there are a lot of factors you have to take into account, including the resources you'll need to deploy it. Hence, there are instances where a large language model might not be the right tool for the job, and you could use a small language model instead, once you understand what exactly a small language model is.
Let's take a step back, understand what a small language model is, and then contextualize it to the question you've asked. A small language model is basically a model with far fewer parameters, either trained or fine-tuned for a very specific use case. There are different strategies for taking an already existing large model and making it smaller by fine-tuning it on very specific, domain-specific use cases. So you could have a lightweight model, maybe one you've shrunk through the process of quantization, or one you've distilled through the process of distillation, which gives you a smaller model that is still capable.
So, back to the question of cost itself. Yes, large language models in general have a high per-token cost: if you pick the most current, up-to-date ones, their token usage cost is usually very high comparatively. So if your specific use case does not require a large language model, then a small language model will suffice. And as we continue this discussion, we're going to look at the factors you need to take into account when choosing whether to use a large language model or a small language model.
Uh, I think you're muted. You're muted. Yes, my apologies. Thank you. I'm going to keep myself on mute so that there isn't an echo. So, thank you for drawing the comparison, Kennedy; I think that sets the perfect context. My next question was actually going to be to ask you to explain what an SLM is all about, so I'll move on to the next slide. If there's anything we need to dive deeper into from this slide, you can take it up.
Maybe the use cases of small language models. Yeah, exactly. So let's take a quick comparison of SLMs and large language models, looking at the slides here. What makes a language model "small"? First of all, there's the model size itself: small language models use fewer parameters compared to large language models. Then there are the hardware needs: they have low hardware requirements for you to be able to operationalize them.
In terms of latency, small language models are comparatively faster, and the different versions or varieties of small language models come with specific latencies depending on the use case. In terms of cost, they are also comparatively less costly than a large language model. I think I've already briefly addressed what makes a model count as a small language model. As you can see there, one factor is the parameter count, and another is the task itself: small language models are very task-specific. You could take your large language model, for example, fine-tune it, and make it fit a specific use.
What do I mean by this? Say you've got a foundation model. What is a foundation model? It's a large language model trained for general-purpose use; you could refer to these as general-purpose models. You could take that foundation model and perform some kind of fine-tuning on it, or maybe some kind of quantization or distillation, and you come out with a small language model that is very task-specific.
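The distillation route mentioned here can be sketched as a training loss: the small "student" model learns to match the softened output distribution of the large "teacher". A minimal sketch, where the logits, class count, and temperature are all illustrative assumptions rather than any particular framework's API:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities, softened by the temperature."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy of the student against the teacher's softened targets."""
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    return -sum(ti * math.log(si) for ti, si in zip(t, s))

teacher = [2.0, 0.5, -1.0]        # pretend teacher (LLM) logits for 3 classes
good_student = [1.9, 0.6, -0.9]   # student that roughly agrees with the teacher
bad_student = [-1.0, 0.5, 2.0]    # student that disagrees
```

During training, this loss (often mixed with the ordinary label loss) pushes the student's distribution toward the teacher's, which is how a much smaller model can retain much of the larger model's behavior on a narrow task.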
So you train it on a very specific task, and that is what you use it for. Now, when we talk about edge readiness: consider deploying a large language model on edge, say on-device. I believe we all have smartphones; say you've got your iPhone 17 Pro or your Samsung Galaxy. Deploying an LLM to that device is going to be really challenging in itself, because you need to account for all these different factors like the hardware needs, the latency, and so on.
One of the advantages a small language model comes with is that, due to its comparatively low hardware requirements, you can deploy it on-device, for example on your phone. Another advantage, because of the small size, the lower latency, and the lower cost involved, is that you can easily manage and deploy it within your own environment without necessarily having to rely on the cloud, because you can run it on your own hardware, on-premises, within your own enterprise infrastructure.
Those are some of the factors that make a small language model more desirable, or more relevant, for specific use cases where you need to think about things like data privacy. Okay, thank you, Kennedy. There is a question from Romesh about combining multiple SLMs as an alternative to an LLM. We will actually be covering this in one of the upcoming slides, so Romesh, I hope you can stay tuned for an answer to that question. Kennedy, maybe you can take the other question, which has come from Arian Khan.
What are the primary architectural differences between an SLM and an LLM? Yeah. So first of all, let's talk about large language models. The majority of large language models are transformer-based, with somewhat different flavors in how they implement the original transformer architecture, which came out in 2017 in the "Attention Is All You Need" paper. If we talk about some of the most frontier large language models out there in the market, we've got, say, Kimi K2, and we've got different ones, right?
We've got DeepSeek. Originally they're all based on the transformer architecture, but they differ in how they implement it. Now, back to your question about the difference between small and large language models. The key point is this: most large language models are trained at 32-bit floating-point (FP32) precision. If you take a model trained at FP32, what you can do is quantize it to a lower precision, which could be INT8 or INT4.
The difference here is that for the quantized model, you made it smaller by reducing the precision of the model weights themselves. So the underlying architecture does not change much; the big difference is just the precision to which you've quantized it, reducing it to a smaller-precision representation. Perfect, thank you for answering that. Moving on: all of this is very fascinating. We've talked about what SLMs are in theory, but what does it actually look like when a company puts one to work?
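The precision reduction Kennedy describes can be sketched in a few lines. This is a toy symmetric INT8 scheme with made-up weight values; real frameworks quantize per-tensor or per-channel with calibrated scales, but the core idea is the same:

```python
# Minimal sketch of symmetric INT8 weight quantization.
# The weight values are illustrative, not from any real model.

def quantize_int8(weights):
    """Map FP32 weights onto the signed INT8 range [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate FP32 values from the INT8 codes."""
    return [v * scale for v in q]

weights = [0.42, -1.30, 0.07, 0.95]   # pretend FP32 layer weights
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)

# Each code now fits in 1 byte instead of 4, at the cost of a small
# rounding error per weight (at most half a quantization step).
max_err = max(abs(a - b) for a, b in zip(weights, approx))
```

This is why a quantized SLM has the same architecture as its FP32 parent: only the numeric representation of the weights shrinks, roughly 4x for INT8 and 8x for INT4.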
I think a lot of people in the audience might also be wondering: hey, this sounds great, but is this actually being used, or is it mostly still a conceptual thing being researched? What's the reality on the ground when it comes to production? Yeah, good question. So it doesn't mean that small language models are replacing large language models, or that companies are no longer using large language models, but there are several factors you have to take into account when you want to productionize your agentic or LLM applications. One of those is the ROI, the return on investment.
Now, companies are using both, in a hybrid approach, where you use large language models for some use cases and small language models for others. Is this something that is already happening, or is it just theory? No, it's not a theory; this is something that already takes place across different industries. As you can see from the slide, there are different use cases: for example, a Llama 7-billion-parameter model being used in customer-support use cases. I cannot go through all of them, but the slide shows where these models are being used today.
So yes, small language models are being used in the industry, not as a replacement for large language models but rather in a hybrid, as an augmentation, depending on your specific use case, and I'm happy to take any question to dive a little deeper on that. Okay. So we'll wait for some questions to come in. Meanwhile, I'll move on to the next slide. So, what are some factors actually pushing organizations and enterprises toward this shift? What are the main reasons why people are slowly navigating toward SLMs?
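The hybrid pattern described here, including Romesh's earlier question about combining multiple SLMs, can be sketched as a simple dispatcher: task-specific SLMs handle the requests they were fine-tuned for, and anything else falls back to a general LLM. The model names and the keyword classifier below are illustrative assumptions; real routers typically use a small classifier model rather than string matching:

```python
# Toy sketch of SLM/LLM hybrid routing. Model identifiers are hypothetical.

SLM_ROUTES = {
    "summarize": "slm-summarizer",   # pretend SLM fine-tuned for summarization
    "classify": "slm-classifier",    # pretend SLM fine-tuned for classification
    "extract": "slm-extractor",      # pretend SLM fine-tuned for extraction
}

def route(request):
    """Pick a model id: a specialized SLM if the task matches, else the LLM."""
    for keyword, model in SLM_ROUTES.items():
        if keyword in request.lower():
            return model
    # Open-ended or creative work falls back to the general-purpose model.
    return "general-llm"
```

The cost and latency win comes from the fact that, in many production workloads, the bulk of the traffic matches one of the narrow routes and never touches the expensive model.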
Yeah. So, top of mind, the ROI, to start with. The cost itself: imagine you are building something like an HR assistant that leverages AI in the back end to respond to HR-related questions. Maybe you've got a RAG system in the back end, leveraging your internal documentation, HR policies, and so on, relevant to your enterprise.
It does not make a lot of sense to take the most expensive large language model out there, just because it is fast, and use it for that specific use case. So cost is driving this: instead of using a very expensive large language model when the task you need to perform does not warrant the power that comes with it, you could use an SLM to achieve the same task.
So cost is one: looking at where exactly a large language model would make sense for your business requirements, and where you would use a small language model. Another one is data privacy. One of the key concerns a lot of enterprises have when it comes to using AI is security and data privacy, because the first questions an enterprise will ask are: where is this data going to go? Who has access to this data? Whichever large language model provider I'm using, are they going to have access to my enterprise data?
Those are concerns a lot of enterprises have. But in the case of a small language model, as we've mentioned, given the hardware requirements, you can actually run it on-prem, within your own internal infrastructure, which addresses the data privacy question. Another reason is latency: because of their small size, small language models generally have lower inference latency than large language models. That's not taking into account the infrastructure you run them on, but comparatively they are much faster at inference time, and that is another thing driving enterprises toward small language models. And lastly, task specificity.
As we mentioned at the beginning, small language models are really task-specific: you can fine-tune them for a specific task, and that makes them more powerful. Instead of using a general language model, you use the one that is fit for the specific job you need it for. Great, thank you so much, Kennedy. I see a lot of questions have come in about how to decide whether to use an SLM or an LLM, and about hybrid use.
We will be covering that in the upcoming slides, so please do stay tuned. But one question I think we can take now, Kennedy, is this: can you give us a couple of examples of real-life applications of SLMs? If I had to start using SLMs from tomorrow, where could I apply them? Yeah. So SLMs really have a wide range of real-world applications; it just depends on what exactly you're doing.
Let's take a very simple example: all of us have smartphones, right? The majority of smartphones these days have some kind of AI capability. What are they using? On-device small language models. That is a very practical example: you may already have a phone that is using AI. For enterprises, depending on which vertical you're in, I'm very sure there are use cases, even for document processing itself. I've talked about building something like a chatbot or AI assistant, maybe for your HR, using a small language model. That would suffice, because you could take a small language model that is trained on question answering, fine-tune it on your enterprise-specific data, and that allows you to use a really focused small language model instead of a large language model that is not very focused on your specific use cases.
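The HR-assistant example can be sketched as the retrieval step of a RAG pipeline. The documents, the plain word-overlap scoring, and the prompt format below are all illustrative assumptions; a production system would use embedding search over real policy documents and feed the prompt to a fine-tuned SLM:

```python
# Toy retrieval step for an HR-assistant RAG pattern. All content is made up.

HR_DOCS = {
    "leave_policy": "Employees accrue 1.5 days of paid leave per month.",
    "remote_work": "Remote work is allowed up to three days per week.",
    "expenses": "Submit expense reports within 30 days of purchase.",
}

def retrieve(question, docs, top_k=1):
    """Rank documents by word overlap with the question (toy scoring)."""
    q_words = set(question.lower().split())
    scored = sorted(
        docs.items(),
        key=lambda kv: len(q_words & set(kv[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:top_k]]

def build_prompt(question, docs):
    """Assemble the grounded prompt the SLM would answer from."""
    context = "\n".join(retrieve(question, docs))
    return f"Answer using only this context:\n{context}\n\nQ: {question}"

prompt = build_prompt("How many days of paid leave do I get?", HR_DOCS)
```

Because the model only ever sees retrieved snippets from internal documents, the knowledge stays in your infrastructure, which is exactly the data-privacy argument made above.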
Perfect. Thank you. I think that perfectly sets the context for the next segment and rounds up the first one. This is quite a powerful statement that we have here: most production AI tasks do not need GPT-4; they need a precise, fast, cheap model that does one thing brilliantly. Kennedy, I think there might be a sort of fear in the industry that if you use a small language model, you are settling somewhere, or compromising on quality.
Is that true? How do you tackle such a dilemma? Can you repeat that again, regarding the quality? Sorry. Yeah, so: what would you say to people who think that if you're using a smaller language model, which is cheaper, you might be compromising on the quality of the output? Is that the case? Is there a difference in the quality of the output? Yeah. So I would say, regarding compromising on quality: just as I mentioned, each of these different types of models has its own specific strengths.
If you've got a task that requires general reasoning, then between a small language model and a large language model, the large language model is stronger at general reasoning. If you think in terms of domain specialization, a smaller language model will excel at domain specialization compared with a large language model. So it is not about compromising on quality but rather choosing the right tool for the right job.
If your task requires general reasoning for your specific enterprise use case, you don't want to pick a small language model for that; you'd go for a large language model. So it's a question of hybrid use: you pick the right tool for the right job. Okay, great, perfect. And that's what we're going to discuss next. So, SLM versus LLM, if you have to put them head-to-head: how should people actually be thinking about this choice? Can you give us a framework for making the language-model decision?
Yeah. So, when we started this conversation, I mentioned the factors you need to take into account when choosing between a large language model and a small language model for your specific use case. Number one: first of all, you have to really define what exactly the task is that you want to achieve with your application. If your task is well defined, well scoped, and well structured, then you'll weigh an SLM against an LLM.
As I mentioned, LLMs are good for general-purpose reasoning. But if your task is very specific and well defined, then you'll go for an SLM instead of an LLM, because you don't need all the power that comes with a large language model just to perform a task that is already well defined. Number two: latency. And again, I'm not trying to generalize here, but if your specific application, depending on what you're trying to build, requires really fast responses, then a small language model would be a better candidate than a large language model.
And remember, when we talk about small language models and large language models, they come in different versions: there are small language models with maybe a 128k-token context, and there are ones with less than that. But in general, when we make the comparison, you'll find that you get lower latency when using a small language model. Data privacy is also a key concern.
We've talked about this and the main reason why um when it comes to uh data privacy more so for the uh industries that are you know regulated industries that requires a lot of compliance and regulations around them in terms of how you uh conduct your business. Then small language model here helps in addressing that in the sense that um if you're really not deploying your model in the cloud somewhere. If your model your data is not getting outside your uh your your your infrastructure itself then that gives you some kind of comfort in terms of the security of your data that is where a small language model will also be uh will be a good candidate for it.
Then there's the budget, the cost. Large language models require a lot of infrastructure and hardware to run: you need GPUs, you need accelerators. Comparatively, the infrastructure needs for a small language model are much lower, and infrastructure needs drive cost. That's why you see so many data centers everywhere: the more companies need to leverage these large language models, the more compute resources are needed, and those compute resources come at a cost.

So if your application requires a lot of compute resources, you're going to pay more, and that leads to a higher cost of operationalizing your application. Then there's creativity. I've talked about general reasoning: if your application requires general reasoning, the ability to be more creative, then, since small language models are more narrowly scoped for specific use cases, the better candidate would be a large language model.
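The factors Kennedy lists can be sketched as a simple decision checklist. This is an illustrative sketch of the framework as described in the session, not a tool from it; the field names and the priority order are assumptions for demonstration.

```python
from dataclasses import dataclass

@dataclass
class TaskProfile:
    well_defined: bool        # narrow, well-scoped task?
    needs_deep_reasoning: bool
    latency_sensitive: bool
    privacy_sensitive: bool   # must data stay in-house?
    budget_constrained: bool

def choose_model(task: TaskProfile) -> str:
    """Toy version of the SLM-vs-LLM framework discussed above."""
    # Broad, creative reasoning is the one hard requirement for an LLM.
    if task.needs_deep_reasoning and not task.well_defined:
        return "LLM"
    # Latency, privacy, or budget pressure pushes a defined task toward an SLM.
    if task.latency_sensitive or task.privacy_sensitive or task.budget_constrained:
        return "SLM"
    return "SLM" if task.well_defined else "LLM"

# An HR chatbot (a use case from the session summary) lands on the SLM side.
hr_bot = TaskProfile(well_defined=True, needs_deep_reasoning=False,
                     latency_sensitive=True, privacy_sensitive=True,
                     budget_constrained=True)
print(choose_model(hr_bot))  # -> SLM
```

A real decision would weigh these factors rather than short-circuit on them, but the ordering mirrors the talk: reasoning needs first, then latency, privacy, and cost.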
We have a lot of questions in the chat as well. Yes, Kennedy, we have a lot of questions, and everybody here has different use cases; they're from different industries and different companies. This webinar is not about telling you "use an SLM for this task and an LLM for that task." What we're trying to do is give you a framework you can apply to make that decision based on your requirements and your environment. We wouldn't be able to say specifically which model to use for which task without a lot more context, so I'm going to skip those questions. But one question that struck me was about data.

I see a few questions about data. How much data can an SLM handle? If you compare it with an LLM, how does that span? Yeah, so again, as you mentioned, to recommend a specific path we'd need more context than just saying you need X amount of data for a small language model and Y amount for a large language model. Remember, there are both large and small language models whose context length runs to around 128k tokens.
There are a lot of nuances and factors to take into account before you can say how much data you need. But in general, small language models require less, in terms of infrastructure and resources. I'm not sure whether the question is about the data you need to fine-tune a small language model, or the data you feed it when you're already running it for inference.

If it's the data you need to take a large language model and fine-tune it into something task-specific, that's a comparative question that's hard to put a single number on. But yes, for fine-tuning you need comparatively less data than for training or pre-training a large language model from scratch.
I think it was about how much company data an SLM can handle. Is there a fixed dataset size limit, or does it depend on token limit, training method, and RAG setup? It actually depends on all those factors. When you talk about company data, a company has many different data sources, and you don't need all of them for a single business use case. You're not going to build one model on every data source; some subset of the data can be used for application X, another subset for application Y, and so on. So it depends, and those are the factors you have to take into account. Okay, perfect. So moving on, I think this is the topic a lot of people will be interested in: how does the hybrid model work? Can you take us through a hybrid setup, what it actually looks like in practice, and how to achieve that middle ground between the models?
Yeah. Let me give you a practical example. In a production environment, when you're building an application, you want smart routing of tasks to specific models. Say questions are coming in: when a task hits your LLM application infrastructure and it doesn't require a lot of reasoning, it's not very complex, it's a simple question-and-answering task that doesn't need deep reasoning, you route it to a small language model.

If a task coming from the user requires deeper reasoning, deeper thinking, then you route it to a large language model. So you have smart routing: depending on the task coming from the user, you route it to the right model, either a small language model or a large language model. That's what we call smart task routing. And we talked about making the decision whether to use a small language model or a large language model.
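A minimal sketch of that smart-routing idea. The keyword-and-length heuristic below is a stand-in for the classifier or semantic router a real system would use, and the model calls are stubbed out; everything here is illustrative, not a production recipe.

```python
# Queries matching these markers are treated as needing deeper reasoning.
COMPLEX_MARKERS = ("why", "compare", "analyze", "strategy", "trade-off")

def route(query: str) -> str:
    """Return which model tier should handle this query."""
    q = query.lower()
    is_complex = len(q.split()) > 25 or any(m in q for m in COMPLEX_MARKERS)
    return "llm" if is_complex else "slm"

def answer(query: str) -> str:
    tier = route(query)
    # In a real pipeline these branches would call a local SLM
    # or a hosted LLM API respectively.
    return f"[{tier}] handling: {query}"

print(answer("What is the office Wi-Fi password?"))           # routed to slm
print(answer("Compare our churn drivers and retention strategy"))  # routed to llm
```

The payoff of this pattern is exactly what the talk describes: simple, frequent queries never touch the expensive model, which cuts both latency and cost.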
We mentioned that it's not a question of replacement, but of picking the right tool for the right task. Hence the need for a hybrid with smart, semantic routing, where a task goes to whichever model, large or small, is relevant for it. Then there's cost. We talked about cost optimization: one of the advantages of a smaller language model is that it requires less infrastructure to operationalize and run.

So they help with cost optimization. With the hybrid approach, where a simple task goes to an SLM and a more complex task that requires broader reasoning goes to a large language model, you address cost, because you're not invoking a large language model all the time, even for simple tasks. That reduces the pressure on your infrastructure, which in turn helps with the cost of running your application.
We also talked about latency. Depending on your application, and again we're speaking from a general perspective, since every specific enterprise use case has its own factors to take into account, a well-defined task routed to a small language model gets a much faster inference-time response, which reduces the overall latency of your system.

Say your application is a system of both small and large language models. The compounded latency will be much lower if, for the cases where a small language model suffices, you route to the small language model, since you get the response much faster. And we talked about privacy as well: using a small language model that you can run locally, rather than deploying anything to the cloud, helps you address the privacy issues that come with cloud deployment.
So in terms of data privacy and data security, it can help address those kinds of needs. For example, at an enterprise you'll have a variety of data coming from different sources; perhaps you have data you need to process that contains PII (personally identifiable information) or other sensitive data you don't want to expose, but you still need to use it in your application. In that case, number one, you still have to take the regular steps, whether that's data anonymization or masking; but additionally, instead of exposing the data to a cloud-hosted large language model, you could use a small language model running in your own infrastructure for that specific use case.
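The masking step mentioned above can be as simple as a scrubbing pass before any text leaves your infrastructure. The two regexes below are illustrative only; production systems use dedicated PII detectors, not a pair of patterns.

```python
import re

# Toy patterns for two common PII types (email, US-style phone number).
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
PHONE = re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b")

def mask_pii(text: str) -> str:
    """Replace recognizable PII with placeholders before it leaves the enterprise."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

record = "Reach Jane at jane.doe@corp.com or 555-010-1234 about her claim."
print(mask_pii(record))
# -> Reach Jane at [EMAIL] or [PHONE] about her claim.
```

The hybrid pattern then becomes: sensitive records stay with the on-premises SLM, while masked or non-sensitive text may be routed to a cloud LLM.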
I hope that addresses the question. Yes, we have so many questions, and we will get to them in a short while. I'm moving on to the next segment. Let's zoom out from the technology for a moment and talk about people and careers, because all of this, the shift to SLMs, the new pipelines, the hybrid architectures, might be changing what AI and data roles actually look like day-to-day. So from your perspective on the ground, how are you seeing roles change? Yeah, good question. For all of us in this field, this is maybe something you're already experiencing.

The industry is very dynamic. It's moving very fast in terms of innovation, in terms of new approaches and technologies for performing specific tasks, and that by itself shifts what's expected of you as, say, a machine learning engineer, AI engineer, or data scientist. Take a role like machine learning engineer. Before the wide adoption of AI, a machine learning engineer was expected in most cases to build classic, or traditional, machine learning models: you'd build your XGBoost application or your neural network application; you'd get your data, do your data preprocessing, train your model, test it, deploy it, and monitor it, going through the whole ML lifecycle.
But right now, as an ML engineer, yes, there are instances where you need to build a classic machine learning model, but in most cases, instead of building the model from scratch, you're going to be leveraging a large language model, or building an agentic application. That requires a different mindset and a shift in how you approach the work, because originally you'd have to train the model. Now, in most cases, you're not training the model at all; you're either doing something like fine-tuning, which is often overkill, or using something like RAG.

So you have to think: I'm not training a model from scratch, I'm taking an existing model and doing some kind of augmentation to it, which brings in prompt engineering. As an ML engineer, you need to understand the fundamentals: how do I do prompt engineering; how do I take an existing large language model, whether that's GPT, Claude, Gemini, or whichever model, and build an application out of it? So you won't be doing training as in the traditional approach; rather, you'll take an existing model and augment it, maybe starting with a RAG application, maybe a hybrid of RAG and prompt engineering, maybe some fine-tuning. The same goes for how you approach deployment: if you're using an existing large language model from Anthropic or Google, you're going to be dealing with APIs, calling the different APIs you need in order to work with these models and deploy them.
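A minimal sketch of the "augment rather than train" idea: retrieve relevant context, then build it into a prompt for an existing model. The word-overlap retriever, the sample documents, and the prompt template are illustrative stand-ins for a real embedding-based retriever and a real LLM API call.

```python
# Fabricated enterprise snippets standing in for an indexed document store.
DOCS = [
    "Employees accrue 1.5 vacation days per month of service.",
    "Expense reports must be filed within 30 days of purchase.",
    "The VPN client is mandatory for all remote connections.",
]

def retrieve(question: str, docs: list[str]) -> str:
    """Toy retriever: pick the doc sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(docs, key=lambda d: len(q_words & set(d.lower().split())))

def build_prompt(question: str) -> str:
    context = retrieve(question, DOCS)
    # In a real RAG pipeline this prompt is what gets sent to the LLM/SLM API.
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("How many vacation days do employees accrue per month?"))
```

The point of the sketch is the shape of the work: no training loop anywhere, just retrieval, prompt construction, and a call to a model someone else trained.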
That's on the ML engineer side. And again, the gap between these different roles seems to be narrowing rather than widening in terms of what's expected of you. As a data scientist, before, you'd think: I have to get my model, do the training, and all that; and perhaps you were doing both the experimentation and the deployment of the model to production as well.

But now your work is not necessarily about experimenting with building models, where you build different versions of a model, check which one does well, then put it in production, or work with your ML engineers or DevOps engineers to put it in production. Instead, you think: here are these different large language models; how do I work with an existing model, and which model do I need for which specific task? Everything shifts to thinking about which kind of model is good for which specific use case.
So your experimentation is not about training the model, but about evaluating different models against the benchmarks you care about for your specific business use case. You look at which model you need: a model for text generation, or a multimodal model, or a model for video, or for language generation. You have to decide which specific model you need for which specific use case, and then do your evaluation based on the benchmarks that matter for your business. The same goes for a role like AI/ML engineer.
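That evaluation loop can be sketched in a few lines: score each candidate model on a small task-specific benchmark and keep the best. The canned "models" and QA pairs below are fabricated for demonstration; a real evaluation would call model APIs and use domain-relevant benchmarks.

```python
# Tiny fabricated benchmark: (question, expected answer) pairs.
BENCHMARK = [
    ("capital of france?", "paris"),
    ("2 + 2 =", "4"),
    ("opposite of hot?", "cold"),
]

# Candidate "models" stubbed as lookup tables of what each would answer.
CANDIDATES = {
    "tiny-slm": {"capital of france?": "paris", "2 + 2 =": "4"},
    "mid-slm":  {"capital of france?": "paris", "2 + 2 =": "4",
                 "opposite of hot?": "cold"},
}

def accuracy(model: dict, benchmark) -> float:
    """Fraction of benchmark questions the model answers correctly."""
    hits = sum(model.get(q, "") == a for q, a in benchmark)
    return hits / len(benchmark)

def pick_best(candidates, benchmark) -> str:
    return max(candidates, key=lambda name: accuracy(candidates[name], benchmark))

print(pick_best(CANDIDATES, BENCHMARK))  # -> mid-slm
```

The structure is the same whatever the task: define the benchmark you care about first, then let the scores, not the model's marketing, drive the selection.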
You also think more about how, as you've noticed in the industry, AI coding assistants, whether that's Claude Code, Cursor, or Gemini, are becoming very good at writing code. So the question is no longer whether you need to spend a lot of time writing code; you can leverage these assistants to write code for you.

But you still have the responsibility of reviewing the code and making sure it's right. The question now is operationalization: how do you design a system that can operationalize these different large language models, or the hybrid large-and-small language model application you're building? So you shift more toward system design, thinking about frameworks: when do I need to use what in order to architect this kind of solution?
Okay. Thank you for painting that very detailed picture. I think it sets the perfect context for understanding how SLMs are changing things right now. And we have a very striking data point here: only 11% of ML practitioners have hands-on SLM fine-tuning or deployment experience, but 68% of companies plan to deploy edge SLM solutions in 2026. That gap between the number of people with hands-on SLM experience and the number of companies looking for it is a massive opportunity, right? So if someone wanted to start closing that skill gap for themselves, what's the most practical first step they could take this week, Kennedy?
Yeah, there are different routes one could take to put yourself in that competitive spot. The industry is moving fast, as I've mentioned, and it requires you to stay up to date with what's going on. At the same time, look at the needs of the industry. We briefly talked about how the roles are shifting and converging: what a data scientist does, what a machine learning engineer does, what an AI engineer does; the tasks and responsibilities are really converging across these roles.

So the question is how you position yourself to have that competitive advantage: having the right skills to know, given a large language model or a small language model, how to take it and customize it for your specific use case. There are a lot of LLMs out there. How would you decide which is the right one for your specific use case?
And once you've chosen, what do you need to do to it? Does it require some kind of augmentation or customization, and what does that look like? Gaining those kinds of skills, I believe, puts you in a more competitive spot and allows you to take these language models and operationalize them. And alongside operationalizing these models, you need to think from the system-design perspective.

How do you take a system, design it, and make it work? Those different angles, and I'm not saying they're the only things you need, really help put you in a more competitive position to address the challenges that come with productionizing and using these LLM applications. Okay, perfect. You spoke about staying updated and staying ahead with the trends and skills that are arising, and that sets the perfect context for the next segment. Everything we covered today shows the pace at which things are changing; it's extraordinary. SLMs didn't exist as a mainstream concept a few years ago, and now they're reshaping enterprise budgets, job titles, and how teams are built.
So the question I want to leave with the audience is this: given that pace, how do you stay ahead of it? Learning on the job only gets you so far; at some point you need structured, comprehensive exposure, and that's exactly what we want to share with you next. What we're about to talk about is something Simplilearn built specifically for this moment, for professionals like you who want to go beyond the surface level and actually build expertise in AI and ML. This is the Professional Certificate in AI and Machine Learning, delivered in partnership with Michigan Engineering Professional Education.

It's one of the most respected names in enterprise technology. Let me tell you what makes this program very different from anything else out there. First, the certificate you earn comes from one of the world's most recognized engineering institutions, and that carries real weight. Second, this is learning by doing: you'll be building chatbots, working through AI automation challenges, and solving real problems. Third, you get IBM's direct involvement, because this program is in collaboration with IBM: certificates, masterclasses, and access to IBM experts. And finally, you walk out knowing 15-plus tools that are actually used in industry today. So where does this program actually start, and how deep does it go?
Kennedy, it would be great if you could quickly take us through the learning path of this program. Since you're one of the mentors, I think you'll be perfect to take participants through it. Yes, and there's also a question in the chat that I was looking at: do I need to know Python for ML? I'll address that as part of the learning path. Apart from my regular work in the industry, I'm also one of the trainers here, and one of the programs I teach is the Michigan professional program, the advanced generative AI application program here at Simplilearn.

Those are weekend and evening classes, so in simple terms they give you the flexibility, even if you're working, to take the class in the evening or over the weekend. Now, the learning path: if we take, for example, the Simplilearn–Michigan collaboration for the advanced generative AI application, we start with a Python refresher, and that addresses the question from the chat. When it comes to machine learning, data science, and AI, Python is the predominant language used in the field, and the reason is that Python comes with a lot of advantages that make it desirable for AI and machine learning.
It comes with a lot of libraries and packages that make your life much easier in machine learning. So yes, you need some level of knowledge of Python. And we recognize that most of the learners who join this program don't come from a Python background; maybe they come from another programming language, or they've never done any programming at all. The goal of the Python refresher in this program is to give you the fundamentals you need in Python to understand and apply the concepts you're going to learn.

Then we have the generative AI literacy unit, which covers the introductory concepts, the fundamentals of AI: what exactly does AI mean? There are a lot of nuances, and this unit gives you a deep dive into what AI means and the fundamentals behind it. Then we cover machine learning: the different kinds of machine learning and how you build different ML applications, with a lot of hands-on classes that give you experience building them. Then there's the capstone project, which draws on everything you've learned during the program: you take all that knowledge and build an application you can showcase, maybe in your LinkedIn profile or your portfolio. And it's not just about building something for your portfolio; it's about giving you the hands-on knowledge you need.
Great, thank you so much, Kennedy. Before moving on to the next slide, I just wanted to let everyone know that a lot of questions have come our way. Someone is asking whether this is a live session because they've been trying to get answers to their questions; yes, it is. Please note that we have a Q&A segment planned at the end, so we'll try to answer as many questions as possible. But considering we've already taken an hour of your time, we'll keep the next slides very short and maybe take just three to four questions.

So I'll quickly move on to the next one. Like Kennedy mentioned, knowing just the theory is not enough. Companies hiring right now want people who have actually built things, and that's why this program includes more than 10 industry projects. You're not just watching someone else code: you'll build a computer vision model yourself, create an AI-powered HR assistant, deploy a chatbot, and work with real data to solve real problems across industries like healthcare, finance, retail, and autonomous systems.
So by the time you complete this, you'll have an AI portfolio that speaks for itself. The skills and tools covered in this program are plentiful and very relevant in today's competitive job market: prompt engineering, agentic AI, ML modeling and training, large language models, and NLP. This is the stack AI practitioners need to be fluent in right now. And as you can see on this slide, there are 15-plus tools that this program will cover. Beyond the curriculum, Simplilearn supports your career transition end to end.

Interview prep, resume and profile optimization, and curated job opportunities matched to your skill set. And let me tell you, this is not a common offering from other learning platforms. There may be a lot of surface-level job or salary guarantees out there, but you need to look closely at what exactly they're offering to back that guarantee, and that's where Simplilearn's commitment to your learning outcome after the program really stands out. So for those of you interested in enrolling, here are the terms of investment: the program fee is ₹279,989 for learners from India and $4,300 for learners from the USA.
We've tried to make this as accessible as possible. For learners in India, EMI options start at under ₹13,000 a month, and for US learners, under $430 a month. For our audiences from other parts of the world, let me take a moment to share the program link in the chat. You can get the exact fee in your local currency on our program page; please do check it out. We apologize for not being able to present all the currencies live right now.

So just to recap, here's what this investment includes: 200-plus hours of live online sessions led by industry experts; 12-plus hands-on projects, including three capstones across various industries; exposure to 15-plus tools like ChatGPT, Claude, Descript, Julius, Zapier, and Python; exclusive access to hackathons and ask-me-anything sessions hosted by IBM; session recordings available 24/7 on the Simplilearn LMS for a flexible learning experience; a dedicated cohort manager for all your queries, who will help you succeed at every learning step; and, most importantly, the Simplilearn career services I just spoke about. For those of you interested in enrolling, I'm going to launch a poll: simply click yes or no to let us know if you'd be interested. If you want to learn more, want time to decide, or have more questions, please click yes.
Our expert advisor will get in touch with you to guide you further. And if you're voting no, feel free to let us know why in the chat; we'd genuinely like to understand what's holding you back and how we can serve you better. You can also use the QR code on the screen to download the curriculum or learn more about the program. So while we have the poll live, Kennedy, I think we can take maybe four or five questions if we're a little quick about it.

I don't want to keep everyone waiting, and you as well, so let's quickly take a few questions. Okay, so: these SLMs, are they similar to the tools we used to build in the old days to perform specific tasks? I saw references to things like Canva and NotebookLM as well, so I think participants want to know what SLM tools are really like. Kennedy, we can't hear you; I think you're still on mute. Oh, okay. Yeah, sorry. You asked a question there.
you you are mentioning that is the question regarding you mentioned notebook LM and and some other things Canva Canva okay so what is the question um are SLMs like that like are SLMs like these tools is what the question was okay so um depending on which specific maybe um tool or that you're looking at and what exactly an SLM is what a small language model yes there could be uh in when it comes to the implementation it depends on it doesn't mean that is a much one to one on the tools that we talked about that all these tools are SLM but I'm saying in terms of when it comes to the implementation itself depending on what specific use case and what is it that you're building you could be able to have an SLM SLM small language model as an implementation of that but it doesn't mean like when we talk about all these other tools uh that you even mentioned on the slide there that all those tools are a match to one to one on to what a small language model is.
Okay. Right, so moving on: why should we use SLMs when we can use AI agents? So remember, what exactly is an agent? An agent consists of different things. One of the things an agent consists of is a model; it also consists of tools, skills, memory, and so on. That means you could have an agent, and instead of using a large language model, that agent could use a small language model, a large language model, or only a small language model.
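Kennedy's point can be sketched in code. This is a purely hypothetical illustration, not any real agent framework's API: the `Agent` class and the stub models are made up to show that the model slot is interchangeable between an SLM and an LLM.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical sketch: an agent is a model plus tools, skills, and memory.
# The model slot can hold an SLM, an LLM, or anything with the same interface.
@dataclass
class Agent:
    model: Callable[[str], str]                # SLM or LLM -- interchangeable
    tools: dict = field(default_factory=dict)  # e.g. search, calculator
    memory: list = field(default_factory=list) # conversation / task history

    def run(self, task: str) -> str:
        self.memory.append(task)
        return self.model(task)

# Stub callables standing in for a small and a large language model.
slm_agent = Agent(model=lambda t: f"[slm] {t}")
llm_agent = Agent(model=lambda t: f"[llm] {t}")

print(slm_agent.run("classify this ticket"))    # -> [slm] classify this ticket
print(llm_agent.run("draft a migration plan"))  # -> [llm] draft a migration plan
```

The same `Agent` structure works unchanged whichever model is plugged in, which is the point Kennedy is making.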
It depends. An agent is not a model; an agent consists of a model, and that model could be small, large, or even a combination. You could have an agent that is a multi-agentic system with different models, small and large language models, depending on what each sub-agent within your multi-agentic system is doing. Mhm. Okay. Can you define 32-bit floating-point precision? I'm not sure I'm technically knowledgeable enough to understand if this is relevant to the SLM topic, but you can take a call on whether we need to address that question. Yeah. So in general, large language models are trained with 32-bit floating-point precision for the weights. There is the concept of the model weights themselves; I'm not sure if the person asking is familiar with weights. When these models are trained, the weights come in 32-bit floating-point precision. If you take that model and want to use it for, say, an edge device, then you need a lower-precision representation of those weights, so you do model quantization. Quantization, as I mentioned, is where you take a large language model trained with 32-bit floating-point weights and then quantize it.
There are different types of quantization; I'm not going to go into the solutioning of how you do that right now. But quantization takes that 32-bit floating-point precision and quantizes it down to, say, an int8, or int4 if you really want to go that far, representation of the model weights. That makes the model's memory usage much lower, which allows you to use that model for on-edge use cases, for example.
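A minimal sketch of the idea Kennedy describes, in plain Python rather than any real quantization toolchain: map float weights onto int8 integers with a single scale factor, so each weight takes one byte instead of the four bytes of a float32.

```python
# Toy symmetric int8 quantization -- an illustration of the concept,
# not how production toolchains implement it.
def quantize_int8(weights):
    """Map float weights onto integers in [-127, 127] with one scale factor."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 representation."""
    return [v * scale for v in q]

weights = [0.8, -1.27, 0.003, 0.5]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# Each weight now fits in one byte (4x smaller than float32), and the
# round-trip error is bounded by half a quantization step.
assert all(abs(a - b) <= scale / 2 for a, b in zip(weights, restored))
print(q)  # small integers in [-127, 127]
```

The trade-off is exactly the one in the answer: lower memory footprint in exchange for a small, bounded loss of precision, which is what makes on-edge deployment feasible.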
Mhm. Okay, thank you for that, Kennedy. The last question we will take is from Anubel. Number one: how is sensitive data protected when tasks move between local SLMs and cloud-based LLMs? Number two: do you think hybrid AI will become the standard architecture for future enterprises? Yeah, interesting question. So if I take the first one: when we talk about a small language model that you are able to operationalize within your local infrastructure, yes, there are a lot of nuances and things you need to do to make sure that data is secure.
But at the same time, if the data is not really leaving your infrastructure, the security risk is comparatively lower than if you had to send that data to a large language model somewhere, or give a large language model access to that data. The advantage here is that for a model you are running on your local system, the blast radius of the security exposure is much easier to manage than for one you are not the one managing and controlling.
The second part was: is hybrid going to be the future standard architecture in all enterprises? Again, I'm not going to overgeneralize and say it's going to be for all enterprises; it just depends on your business model and what exactly you're doing. But it makes sense to take a hybrid approach, because with a hybrid model you are choosing the right kind of model for each specific use case or task. In terms of which is more optimal, the hybrid model makes more sense, because for a task where you just need a small model, there is no need for overkill, using a very powerful, expensive model for it.
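The routing decision behind a hybrid setup can be sketched as a simple dispatcher. The task categories and model-tier names here are hypothetical, chosen only to illustrate the "right model for the right task" idea:

```python
# Hypothetical hybrid router: narrow, well-defined tasks go to a cheap
# local SLM; anything open-ended falls through to a cloud LLM.
SLM_TASKS = {"classify_ticket", "extract_fields", "summarize_doc"}

def route(task_type: str) -> str:
    """Return which model tier should handle this task."""
    return "local-slm" if task_type in SLM_TASKS else "cloud-llm"

print(route("classify_ticket"))      # local-slm: defined task, no overkill
print(route("multi_step_planning"))  # cloud-llm: broader reasoning needed
```

Real routers would classify tasks dynamically rather than from a fixed set, but the cost/latency logic is the same.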
So the hybrid model makes a lot of sense, and I do see a lot of traction in terms of the adoption of hybrid models. Okay, perfect. So with that, I will also end the poll for showing your interest in enrolling in the program. We have a very exciting webinar coming up on the 17th of May, from 7 to 10:00 p.m. IST, which is 9:30 a.m. to 12:30 p.m. ET. It is going to be a live three-hour workshop where we are going to use Claude to code and build a very interesting multi-tool workspace.
So it's going to be a really interesting session, and it is planned in a way that is very beginner friendly; it is for non-coders as well. It will be led by one of our other trainers, an AI engineer, Timothy His. So please scan this QR code to register for the webinar, or let me share the link as well for everyone wanting to register. Great. Okay. So with that, I am going to launch another poll where you can enter your full name.
We will generate the certificate and share it with you within 48 working hours. Just give me a second. Yes, the poll is live. Please drop your full name, the name you want on the certificate, exactly how you want it to appear. We will generate the certificate and share it with you by email. In case you're not able to access the poll, or if there's any issue, please feel free to write to us at webinars@simplilearn.net and we will look into it. Okay. So with that, we are coming to the end of the session.
To all the participants, thank you very much for spending the last hour with us. We hope you'll be able to apply the learnings from this session, as well as what you will get from our Professional Certificate in AI/ML, to get ahead in your career. We know we were not able to address specific use cases, because this session was about introducing you to the concept of SLMs and giving you a framework you can use to make the decision. So yes, if there is anything else you want us to do webinars on, or any questions or feedback you have, please do write to us at webinars@simplilearn.net; we would love to hear from you.
Kennedy, thank you so much for your guidance and for sharing your insights and knowledge; it's been an absolute pleasure hosting you. Do you have any final remarks or advice for our learners before we wrap this session? Yeah. My closing remark would be this: for those who are in the industry, or are interested in it, it is a very dynamic and interesting industry, and there's just a lot you could do with this technology. I would encourage you all to explore how you can leverage AI in whatever role you are currently in, in order to get better at doing whatever it is that you do.
So that would be all from me. Thank you all for finding time to join today's webinar. Thank you. Yes, thank you so much once again, everyone. With those wise words, I am ending the session. Thank you.