Mustafa, Waldek & Adis are talking to Lee Stott

The Road to ECS - S2 E2: Lee Stott

A podcast episode discussing Microsoft’s ECS conference, AI trends, and how developers are building and evaluating modern AI systems.


Welcome everybody to another episode of Road to ECS. This is season 2, episode 2. We are back. Last time we talked with Vesa and had a chat about last year's ECS, the European Collaboration Summit, and how we all experienced it, because it's been a few months already. Time flies; we are heading towards winter now. Nonetheless, it's still a few months until ECS 26, and we have the mastermind crew thinking and scheming and trying to one-up themselves as they always do. They are not sitting still; they are cooking things for us. In the meantime, we try to probe their minds a little, get them to open up, share a bit of the behind-the-scenes of what's going on and what's happening. And in parallel, we also bring guests, like today. We've got with us the man, the myth, the legend, Lee Stott from Microsoft. Lee, how about a few words about yourself, who you are, what you do for work, and we take it from there.

>> Yes, sounds good, Waldek. So yeah, hi everyone. I'm Lee Stott. I'm a Principal Cloud Advocate Manager at Microsoft in our Core AI team. My team is looking after the AI core, the things that are called Foundry — whether it's Foundry Local on your local device or Azure AI Foundry in the cloud. The team that sits behind that builds a lot of the demos that hopefully you've seen at AI Tour, Ignite and Build, and hopefully you're going to see some of our content at ECS and some of the hands-on labs. So really looking forward to it.

>> Excellent. So you are doing AI and LLMs and AI, ML, LLM... we can just keep throwing acronyms around, so let's just call it computer intelligence.

>> Computer intelligence. Come on. Computer intelligence. That's our long-running gag from the last time we had Lee on the show. We challenged ourselves because back then AI was kind of a hype, right?

It was very much like you couldn't utter a sentence without hearing AI or one of the related acronyms or words. So, you know what, how about we make ourselves a challenge? We deliberately make our lives difficult and aim not to mention AI or LLM at all. Instead, when we want to talk about it, we use this phrase, computer intelligence, that nobody else in the world uses — just to make it hard for ourselves and see how often we stumble over ourselves. We thought it would be something that would come and go, but it's there to stay. So from now on, we refer to it as computer intelligence. Nobody else in the world does that; we do. That's our thing now. Anyway, we will talk today about computer intelligence. You've been with ECS for a while. What was your impression of the last one, in Germany in May '25?

>> Yeah. So, number one, I loved the venue. I have to say, ECS is one of these events where I love registration. I think you've got the best registration process ever. You walk into the building, you scan a barcode, your ID card's printed out, and away you go — you're into the expo. That was the most pleasant surprise; a constant of attending conferences is, what's the registration queue like, how long do I have to wait? Here, straight into the expo area. Again, I love the variety of the content in the expo area. We have the secondary stage, where Mustafa was the key MC last year. We have lots of exhibitions and booth space, from startups to large organizations, and the variety in the stands is amazing. There was a bar there with shooting games and cowboys and big furry animals, next to the standard expo space. So I love that framework, and the content was just amazing. The variety of the content from community, product groups and partners was stunning. So I

thoroughly enjoyed myself. We had some great keynotes as well. So, Marco Castellina — can I say he's back for this year? Marco's coming back, you heard it here first. He's going to be talking a lot about computer intelligence and computer intelligence products, which come from both Microsoft and third parties. And again, I thought it was really exciting: we were at the cusp of computer intelligence last year, and it's going to be really interesting to see. You know, we're a week away from Microsoft Ignite. There are going to be lots of announcements at Ignite — lots of new products, lots of new services, lots of new showcases. Then we go to ECS, and then into Microsoft Build after ECS. So this is quite an interesting ECS, because it's the first one that is pre-Build; it's usually post-Build. So we might be showing some things that are coming. We might be showing some things that may be potentially new. We won't be doing announcements at ECS, but we'll be talking about some of the future things to watch out for.

>> If anything else, Lee, a compliment to you: your mastery of using "computer intelligence" is right there, on the spot. It almost sounds like you practiced.

>> I might have. I knew I was going to be on this show. These folks are totally going to put me on the spot again. Not falling for that trap again.

>> Going back to what you said first: folks, you heard it here first. If there's one reason for you to come to ECS, it's to experience that great check-in experience.

>> I do hope that people come for other reasons too, but yes, we do have a good check-in experience.

>> I mean, it is the whole experience, right, that counts. But it is a part of it: how the event kicks off, then entering the expo, all the things that are happening there, and of course the content. And at this point I'm going to give the challenge a try. Content-wise, ECS is separated into three different events, right, and one of them is — here I tried — the European Computer Intelligence and Cloud Summit. I need to mention here that Lee is part of the content team there, and he's one of the people responsible for that awesome content that we deliver during the show.

So with that in mind, Lee, I want to ask you: what's your strategy for ensuring — because you're picking the content already now, and if there's one thing we've learned so far in the computer intelligence space, it's that things evolve almost every day: there is some new announcement, a new tool, new model, new framework, new library. What is your strategy to ensure that whatever you pick now is going to be valuable for the folks who are going to attend the event in May next year?

>> Yes, great challenge. It is a huge challenge. We're actually meeting in December to do the content review, which will be post-Ignite. The big announcements for early next year will be made at Ignite, so we'll have very good clarity on what we're expecting to happen. I think the biggest issue in any of this technology today is really around product name changes.

>> A new model a new day, or a new day a new model. I think that's the norm. I think the key thing is really that stability of how we demonstrate the different aspects of AI. Oh, I said it. Sorry. Computer intelligence. Huh.

>> For a moment, I had no idea what you were talking about. I was like, "Oh, yeah, of course, there's a euro in the swear box."

>> Oh, yeah.

>> The computer intelligence box.

>> Let me try shortly. We have a spin-off conference happening in Sarajevo at the end of March, early April. Thank you, Lee, for being there with us as well. Back to your question: the day that Thomas, Mustafa, Omar and me finished the content, the very same day or the next day, Microsoft announced the Azure AI Foundry Agent Framework.

>> So of course we didn't have a single session about it. And with a conference in March and April, obviously we need to talk about that framework. I was looking at Mustafa like, call Lee now.

>> And we came through and found

someone.

>> Right, and yes, this definitely is a challenge — I would even say an issue. But we know, and people who are following this stuff know. For example, for the past year and a half, my main talking point when speaking at conferences was Semantic Kernel. We as a company have invested a lot in Semantic Kernel and built Semantic Kernel. Yes, we will be changing things. Yes, there will be an evolution of what I'm talking about, because Semantic Kernel will be part of the Agent Framework, or will be used for it. But that's the industry we all chose to live in. Computer intelligence is a very, very dynamic space now.

>> But I guess that brings us to an interesting point, and Lee, I'd love to hear your take on it. We're in a space where things change a lot. We rename products; there are new frameworks and libraries coming out, basically to cater to the ever-evolving needs that we have, and also to the abilities that we get from LLMs and cloud services and whatnot, right? So on one hand, you could make the point that when we plan a talk or a presentation, we want to stay on the conceptual level. Let's take as an example what you said, right? We have Semantic Kernel being the library, but what it allows you to do is orchestrate things. So you could make a talk something like "orchestrating agents", and at that level it's going to stay relevant for the coming five years, because that need will not change: you will still need to orchestrate LLMs, whether you use this tool or that tool. But on the other hand, there might be folks who are looking for specific things, basically filtering on keywords and product names, because they want to learn about exactly that thing. So what's the tip for folks who submit CFPs, submit talks, write abstracts, create content? How do you handle

that in your team?

>> Yeah, it's a really great point, Waldek. It's like teaching: it's going back to the fundamentals. When you create your CFP — abstract, talk title, etc. — try not to be product-specific, because products and, more importantly, models will change. Models are starting to get deprecated now, so we're on the eve of that model deprecation. What do people do when GPT-3.5 gets deprecated and is no longer available as a model? So recommendation one is: keep your session title and your session description at a high level. These conferences allow you to use tagging and keywords, so if there are specific products in your session, when you come to start advertising it, add "Semantic Kernel". Again, we're in an era where things aren't necessarily going to go away; frameworks aren't going to go away. It's really around that story of the orchestration, and if you do change from Semantic Kernel or AutoGen or LangChain or LangGraph to using Agent Framework, or Agent Framework in combination with those, that's when you add those tags to the description. More importantly, it's how you create the social media and marketing around your session, which ECS can help promote, you can promote, and you can get your organization or your friends to promote as well. So I think it's much more that high-level abstract, which then allows other people to go, "Oh yeah, this sounds interesting," or, "I'm really interested in this aspect." I think the key areas for me around growth at the moment are model discovery — and when I say model discovery, it's model discovery, benchmarking and evaluation. One of the key things that I really want to try and get across for the rest of this year is evaluation-led design and

specification-led design, because I think that's the era we're now moving into.

>> That's something very intriguing and close to my heart. By coincidence, I've been working on something internal at Microsoft for the last three months related to evals. It's also something that I first came across when we started to integrate small language models with Dev Proxy: which model is the best for me, which one should I use? Because there's a ton of them; every day it feels like there is a new model — which there is. How do I decide which is the best? There are benchmarks and there are things, but it very much feels like, you know, I want to buy a car, and somebody has done an eval and says, hey, this car has a range of 800 miles. That's cool — but when was the last time I drove that far? Never. So how valuable is that test done by somebody else under academic lab circumstances, an environment where there's no draft, the roads are perfect, and they have a number assigned to it? What does that mean to me? This whole concept I think is invaluable. By coincidence, I also submitted a talk on exactly that to ECS. I wonder if it will be picked — fingers crossed. Maybe it will, maybe it won't. Maybe we will join forces; maybe you will hear me share my take on it. But if there is one thing it would be great to get people's minds to, it's to think about that: which one is the best for me? Don't just take somebody else's numbers at face value; they might not apply to you at all.

>> Let's also not forget there are sometimes business constraints which we all need to follow — from customers, from people, from...

>> I know where you're going with that one.

>> Well, I actually did not want to trigger this one, but for the past six months I have heard from two customers of run.events, of our company, that they are going to take our computer

intelligence product if it's based on Mistral. So obviously it's not only technology that decides; sometimes those are business constraints which we need to consider. But I want to circle back shortly to Lee. He said, "in my team we are discussing it", and I was thinking: I've had the privilege of getting to know a few people from your team — Carlotta, Corey, awesome people — and we feel very thankful and privileged that you guys have been with us for a few years now at ECS. Lee, can you tell us more about what you guys are doing? How does it look daily for you, for Corey, for Carlotta, for the other team members? What's the work that you are doing for Microsoft, for the community — all the great articles you are publishing that I'm always reading, and the demos that you are doing? Tell us a bit more about the internals.

>> That's where the answer goes south: I start the day, open up the laptop, find out about 20 million new models, and that's the end of my day, because I'm spending the rest of the day trying to understand what's just been released. So, it's interesting: myself and Waldek are both in the same group within Microsoft; we're both within developer relations. Our key focus is about securing the future of the platform. That's our goal: to work with developers — whether they're AI engineers, developers, or people new to the platform — to really give them guidance on how to use Microsoft tools, technologies and services. We've got three focuses. The first is content: we have to build content as a team. So my team — poor Carlotta at the moment is buried in Microsoft Ignite. She's running a Microsoft lab at Ignite, a brand new lab, so she's heavily involved in validating that lab: making sure it works, making sure the documentation's there, validating it with our ambassadors, with our MVP communities and with our student

ambassadors, who get early access to these things to validate and check from that developer-zero perspective. The next one is really around community. I've just literally got back from Frankfurt — last week I was in Frankfurt doing our AI Tour. The team's somewhere else this week, I can't remember off the top of my head, doing AI Tour, and we're doing AI Tour throughout the year, all the way up to pretty much ECS and Build. So we're traveling the globe presenting a variety of content that we've built, which is really around computer intelligence: how to use these models, how to build solutions, how to use Foundry. And we're looking at it from different aspects. From a local perspective, we've got a great session on using the Microsoft AI Toolkit, which is a VS Code extension, and we talk about evaluations and observability — starting with GitHub Models and doing evaluation from just a physical "how do models interact or compare side by side", to then building very simplistic agents, which we then evaluate from both a manual perspective and an automated perspective. We've got things like fine-tuning, where we talk again about which model to choose: if I'm going to do distillation or fine-tuning, which model should be my teacher model, which should be my student model, and then what techniques and services do I use for distillation? And we've got lots of hands-on workshops. So that's content and community. The next key area is feedback. We want to do these events; we want the feedback. If all the feedback was good, me and Waldek wouldn't be here — there'd be no use for us.

>> No room for improvement.

>> So we want the good and the ugly feedback. The good, the bad, and the ugly is

what we call it. Good feedback: yeah, we've done a great job; we love seeing that. We love seeing NSAT scores, etc. about our sessions and our content. Ugly feedback is really key, because that's something we can fix: the docs didn't work for me, or this sample doesn't work, and so on. So what we do want is...

>> Exactly.

>> We want people to give us feedback on our repos. Everything we do, we try and do under open source, so we've got lots of open samples. Waldek's got a great project with the dev tools; I've got a great project with a customized translator for languages. We want people to utilize these. Give us the feedback. Try the tooling. Try the products. Try the models. And really, let's have that feedback. If it's bad feedback, give it to us — we're all grown-ups. We want the bad feedback that will help us learn. So that's what we do on a daily basis. It's super varied, honestly. Like Waldek says, something new gets launched or something new is going to change, and we have to be that developer zero. We have to test it. We have to document it. We have to build samples on it. And yes, it's very frustrating. We live in a technology world where we are playing with some of this technology before anything has been done. There's usually no docs.
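The side-by-side comparison and automated evaluation Lee describes earlier can be sketched very simply. Here is a minimal Python harness, with stub callables standing in for real model endpoints (in practice you would call GitHub Models, Azure AI Foundry, or Foundry Local); the model names, test case, and keyword-based scorer are all illustrative assumptions, not the team's actual tooling:

```python
# Minimal sketch of side-by-side model comparison with a simple automated eval.
# The "models" are stub callables standing in for real chat-completion calls;
# all names and the scoring heuristic are illustrative.

def eval_response(response: str, must_contain: list[str]) -> float:
    """Toy evaluator: fraction of expected keywords present in the response."""
    hits = sum(1 for kw in must_contain if kw.lower() in response.lower())
    return hits / len(must_contain) if must_contain else 0.0

def compare_models(models: dict, test_cases: list[dict]) -> dict:
    """Run every test case against every model and average the scores."""
    scores = {}
    for name, model in models.items():
        per_case = [
            eval_response(model(case["prompt"]), case["must_contain"])
            for case in test_cases
        ]
        scores[name] = sum(per_case) / len(per_case)
    return scores

# Stub "models" — replace these with real API calls to compare actual models.
def model_a(prompt: str) -> str:
    return "An agent is orchestrated code that calls tools and a model."

def model_b(prompt: str) -> str:
    return "It depends."

test_cases = [
    {"prompt": "What is an agent?", "must_contain": ["agent", "tools", "model"]},
]

print(compare_models({"model-a": model_a, "model-b": model_b}, test_cases))
# → {'model-a': 1.0, 'model-b': 0.0}
```

Swapping a stub for a real chat-completion call leaves the harness unchanged, which is the point: the evaluation sits outside the model, so the same test cases keep working when the model of the day changes.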

>> We chose that life.

>> We chose that life. Exactly. But it's great fun.

>> Yeah.

>> For me, it's always about that passion, enthusiasm, and most importantly, tenacity, because things go wrong daily.

>> What a movie — what a movie from the 90s. You are the docs.

>> May I share an anecdote from this morning's team meeting? Why I'm saying this is because it's awesome to see how we are transitioning from what we started two years ago. Everybody was hyped about ChatGPT 3.5: oh, it's awesome — but what do I actually need it for? What's my problem for this solution? I thought we were in that "what's my problem for this solution" phase for quite some time, but now, with time, with the normal maturity cycle — it happens with all great technology, and this is the breakthrough technology of the past few decades, let's just be clear about that — with the stuff that we are doing... I demoed a prototype in this morning's meeting. We have an email builder in run.events, so you can send emails to your attendees, sponsors, whatever. However, our customers are event professionals, event organizers, who are not necessarily people who can format HTML — that's not their core skill. So what I demonstrated internally in the team this morning was a prompt-driven way to actually write and format emails: one that will change the color of the header box to orange for you, instead of you selecting the header box and changing the color to orange. It's computer intelligence.

>> You only need one prompt: make it pop.

>> Sorry?

>> Make it pop.

>> Yeah, the point is...

>> Three agents: we have a reviewer and a validator.

>> Yeah.

>> We are coming to the stage where we can actually use this.

>> Yeah, where people see that it's not

only a hype tool for the geeks. The mood is settling. It's a genuinely useful business tool — if you know how to use it, when to use it, and how to apply it.

>> That's an interesting point, and I think it comes back to the panel discussion that me and a few others had at ECS last year about using computer intelligence in your work. It comes down very much to this: it's a muscle you have to flex and use; to use it efficiently, you need to train. But it all starts with — it either works or breaks on — you having the awareness, the mindfulness, that it's there to help. So, again putting you all on the spot, whoever is willing to answer — and I guess, Lee, you don't have a choice, because you are the guest — how do you raise that awareness? Even though it's the core of your work and you don't get to choose not to use it, it's one thing to research it and build content around it, and another to actually use it and embed it in all of your work. So what are your go-to tips, strategies, techniques to keep it top of mind? Like, hey, I need to do X — I can do this with an LLM.

>> Gosh. So, let me put this question back to you first of all. You're all developers, yeah?

>> Yes.

>> I am — as is PowerPoint.

>> If I ask you the question, right: are you an individual contributor?

>> Yes.

>> Okay. Are you a team player?

>> I like to think I am.

>> We have to be.

>> Are you a manager?

>> I am managing my kids.

>> Okay. But are you managing your agents?

>> Not yet. No.

>> Okay. So I think that's going to be the radical change that happens over the next six months. I'm hoping that when we get to ECS, we'll be quite a few months ahead, and I'm going to ask that

question again: how many of you are managers? And I expect the majority of the room, who are developers, to say, yeah, I now manage a group of agents.

>> Really? You really expect them, in seven months' time, to... really?

>> Yeah, I do. Honestly, I do. And I think it's going to be a radical change that the development ecosphere is going to go through. There are people who are doing it today. You're doing it today, but you're not realizing that you are now building a group of agents that are being orchestrated to do a task.

>> But they have a human in the loop, and in 90% of cases today it's still the developer who is that human in the loop approving the workflow.

>> So you are managing a team of agents. That's my biggest viewpoint at the moment: I've become much more effective, because I have effectively doubled my team with agents.

>> But to counter-argue — to counter-spell, if you will: you are a manager at work. You have the manager genes; you manage every day, so it's top of mind for you. The transition to becoming a manager of agents takes less effort for you, because you already manage folks — it's kind of the same, just another audience. Maybe that would even be a great talk for you to give — and I don't know if there's even a room for it — "management for the rest of us", where you teach people who aren't intrinsically managers to become managers. Because that's a different skill set: trying to explain what you have in mind in writing, because that's what you need to convey your intent to somebody else.

>> Instructions.

>> Yes, right. And prompt building, prompt engineering — that's a different game, because there are prompts and there are prompts. If you look at truly

agentic prompts, they read almost like normal writing.

>> Very interesting. Okay. So, you know, we are giving design prompts. If you look at specification-led design: everybody's done TDD, agile, waterfall, etc. — they've been taught that through graduate school and so on. What we're doing now is saying to these people: write a specification, write a prompt to do X, Y and Z — so, a set of instructions. But why can't you also write business-KPI instructions to validate these agents? You may be running a balanced scorecard or OKRs, or have a set of KPIs and metrics; those should also be part of your key specification document, because then you can ask: does this solution or system work from an engineering perspective? Tick. Does it work from a business-KPI perspective? Where are the economics, where are the gains, where are the values? So I think we're going to be changing toward this, and while I think it's going to take a long time to happen, this is going to be the next paradigm shift: developers are going to have to start to think more around documentation — which most developers hate — around that instruction base, but then also around the effort and impact. That's a good word for you, isn't it? Impact.

>> Oh, you mentioned the I-word.

>> What is the impact of doing this activity? What are the business gains, what's the business profit, what's the bottom-line margin we can help with? Because as soon as the developer gets that mindset, it's much easier to go and have that discussion with a business decision maker around: should it be this model, this technology or this service, versus that model, that technology or service?

>> It's a great point, but I think that isn't new, because this whole business thinking has been around since I began my professional career 20 years back. Already then, you got

to talk to BDMs — and yes, there are architects in the team who do that. But the devs in the team would basically end up with a bunch of tasks to complete; that was the reality. Now they kind of go a level up, where they need to explain the tasks to the army of agents that they will have. The devs now need to

>> learn how to write specs — which they refused to read for the past 20 years.

>> So now, the cool thing is: do you all know that movie-style meme of a father asking his kids to write instructions for how to make a peanut butter and jelly sandwich?

>> It's the same as "how to change a light bulb".

>> Exactly. I think that would make a great workshop, where you collectively ask folks: okay, we're going to write a spec for this. And after five minutes or so, imagine that you've got an LLM that gives them almost real-time feedback, the ability to say: you submitted your spec; this is what I will do. And it's like, no, no, no — but you said... — well, you want me to do this? So they go back and rewrite, and at the end they end up with a spec and an understanding of what kind of thinking they need to employ to get the outcomes they need out of the ideas that they have, right? Because they need that feedback loop; there's no other way to learn this than by doing, right?

>> Well, this is where it gets interesting: there are ways of learning. Again, it comes back to observability and evaluation: you do it, you evaluate it. It comes back to that evaluation design. If you build a specification and build it and it doesn't work, you're not going to know; you have to have something that tells you it doesn't work or doesn't perform. So you need that eval basis to be able to do that. I think the biggest challenge as well that

developers are going to see is just the rate of change. I was having a really interesting discussion after one of my sessions in Frankfurt last week, where an organization came up to me and said, "This is what we want to do: we want to take these specification documents, which contain images, tables, charts, etc., OCR them, then have an LLM that interfaces with them, and you have the ability to ask questions."

>> Did they have them in print?

>> Yeah, it's all printed.

>> Oh my god. Germany.

>> In Germany. Who's in Germany?

>> Maximal.

>> Yeah, you can probably actually get a fax.

>> So, I was like, yeah, you can build a solution to do that — and they'd been trying to do this for the last few months. And then I said, have you seen the new Mistral OCR model? Mistral now do a document OCR model. And they were like, no. And I was like, I think you might be able to do what you want to do out of the box. So I think this is going to be the other challenge: people are trying to build things, and there are new models appearing that instantly solve the huge, complex task you were thinking about. But then, how do you have the agility in your organization to keep that future horizon scanning going?

>> I mean, this is not even future; this is present, or even past. We all know the story from a year, year and a half ago: PricewaterhouseCoopers, PwC, had been fine-tuning models — and they published a really great blog on this, as a lesson learned for everybody else — on how they lost money on ChatGPT 3.5. They invested a lot of money in fine-tuning, and then GPT-4o came out and was better out of the box than anything they had done, with a lot of money, on GPT-3.5.

>> You don't need no stinking GPT-3.5.

Well, I mean, that was what was available to them at the time; the alternative didn't exist yet. >> But that's also an interesting point, right? >> They built stuff on it, but then GPT-4o came and said, "Yeah, no." >> But that's also an interesting point. How do you decide whether you go build something with the risk that whatever you build might become obsolete, because somebody else is going to ship something that is way better? >> Or you say, you know what, we don't know:

if it comes, it comes; if it doesn't, it doesn't. But we have a problem to solve now. >> We need to stop; I was thinking about this a lot. The moment we stop thinking about how to use AI >> and start thinking how do we solve this problem, and could computer intelligence help us in solving the problem, then again you don't care what's inside. Is it some Mistral OCR model, is it the new awesome

Mistral Medium, which I love (it quickly became my favorite model), or is it classical machine learning that we have known for decades now? When you stop caring about what flavor of computer intelligence you use and start caring about solving your business problem, I think this is where it turns into value. >> Yeah, and I think this is a key point: organizations need to be doing that future horizon scanning consistently. So again, you could build it into your learning,

you can build it into how you're creating that awareness within the organization. And I think that's one of the great benefits that we have, Waldek: we are constantly doing that. You know, we've had so many good discussions about >> Even for us, things are coming left and right, and you would say that we're exposed to it even more, >> because it feels like we need to form an opinion on everything. >> Yeah.

Right. Yeah, and I think that's a good thing. We're at this stage now where, again, using spec-driven development and evaluation-led design, it's that continual evaluation. And I love asking this question; I asked it in Frankfurt and I'm just always amazed. Okay, you've built a generative AI solution, it's now in production, people are

using it: are you looking at how people are interacting with your system? And honestly, when I ask that, there are usually only three or four hands going up of people actually looking at the logs of how users are asking questions of their agents. Because, you know, if we start to do that, again, evaluation-led design, we start to understand: are there really quick, simple fixes

we can do that answer 80% of those problems? Easy. But until we start to think in this mindset, we're going to be rebuilding the wheel, or not building the wheel. >> That is an interesting point, because in a way the concept isn't new. For years we've always had telemetry, user-flow analysis. But it feels like it was less of an issue because the app worked in a predefined way. We built them to have a specific flow; it was deterministic,

meaning if you start the app five times, you will have the same route; that's the way to accomplish something. And maybe the only thing you might have realized at some point is that >> people who did that part also do that part, and your flow is inefficient and you can streamline things. But now, with LLMs, that's no longer the case; people can throw anything at it, because it's just a token-streaming machine, right? So I think the

importance of that has increased many times: you have got to be looking at it. You have got to understand, yes, you designed it for something, but how is it actually being used, and where is the room for improvement? >> Yeah. So I think observability is going to be a key thing going forward, without a doubt. And we are just seeing this even more accelerated, because when you look at the last 15-20 years, things obviously changed in how we do things around computers in general. We went

from products to more of a software-as-a-service model, right? We don't have those cycles where we develop a new product for four years and release it; it's a continuous thing. And with LLMs nowadays it's just accelerating even more. It's a natural process and things are speeding up, but it's nothing new. If you were in software development, forget about computer intelligence and LLMs and everything that's happening, even in terms of just building a product

on something: frameworks are changing constantly. Anyone that's in Angular, for example: every second sprint you need to review whether you're going to upgrade to a new version. So things are constantly changing, and this is just another step in it, maybe accelerated a little bit more. >> We agreed that we are going to be polite in these webinars and not use curse words, and you just mentioned Angular; that's not fair.

That's just an example. It applies to >> Nothing wrong about Angular; the community is still using it. >> I have a product in Angular. >> There you go. There you go. Preaching to the choir, or speaking from your own experience. All right. We've talked for a long time; I think it's kind of time for us to head towards the end. But there's one more thing that I want to ask you, Lee, and I think I might have a guess, but let me see to what extent my gut feeling

matches with the reality. If you can name one thing, just one, because I know there are many, but you can name only one. There can be only one. What's top of mind? What's intriguing you right now, these last days? You can only pick one; you need to pick your favorite kid. >> Um, if I can pick one. So I'm really excited around the opportunity of localized models, and you know, Waldek, you're on one of the v-teams. So for me

I think it's the opportunity of localized models and intelligent applications. What I mean by this is: a user goes to a store, downloads an application, and the application has an inbuilt, pre-tuned, fine-tuned language model that allows you to interact with it. And we've sort of seen some hints of this with Adobe. Anybody who's been using the Adobe products and looked at the latest Adobe AI, sorry, computer intelligence solutions (said it again): think about this

from the standpoint of, say, a business admin. We've seen Copilot in Excel, and it's just phenomenal what it can do when it has that contextual understanding of the application and that contextual understanding of the data, and is able to provide that from an end-user perspective. So, you know, we all use email, we all use messaging apps, we all use collaboration tools. If you start to add intelligence to those, I think that's going to be really exciting. And the ability to do this offline while

you're sat on a train, or on a plane, or watching your kids at a swimming pool, is going to be amazing, because I think that's where people now are sort of going, "Oh, you know, I really wish I had this." And we're at that cusp of being able to do it now in a very simplistic and lightweight form. We've got to a stage where there are some amazing models that are super small. So, yeah. >> I was totally off. I was totally wrong. I thought about evals. Like, you

would be 100% on evals, like picking the right model for a job. So if we had a bet, I would lose. >> Oh, okay. But I guess that is it. Thank you all, and we will be back with episode three and another guest. >> We will, very soon. >> Any final words? >> Sorry? >> Any final words? >> Famous final words. >> I need to learn to play guitar, because you've all got guitars in the background. >> We don't, actually.

You are not wrong about that. Are we having, like, an ECS jam session? >> Let's do that. Actually, we were joking about that, because we've got a few more people in our company who actually play really well, so at our ECS party we could do one song as an ECS team. >> Yeah, that'd be quite cool. >> We might actually do that. Now, thank you very much, Lee; see you very soon in person. >> Yes. See you soon. Yeah, we'll be

vetting all those sessions that people have submitted CFPs for. So, thank you to everybody who submitted, >> and believe it or not, we will not be using computer intelligence to do that. We'll be doing our >> human intelligence, the human in the loop, >> to see what is there for the computer intelligence and cloud summit. It will be Lee, it will be Damir Dobric, it will be Kimmo, and Barbara Forbes. The other tracks will be different people, but for the computer

intelligence one, for all of you who submitted sessions, your eyes should be pointed to Lee, Barbara, Damir and Kimmo. >> Perfect. Well, I think we are all excited to wait and hear about the final pick, so stay tuned for that, I guess. >> Yeah. Do you already know roughly when you will announce them? I mean, of course you have to. When will you announce them?

Mid-December. Definitely before >> December. Okay. >> We actually >> it might come earlier. >> It's actually even too late for us, because we wanted to have it earlier. But it turned out it's very difficult to merge and overlap the schedules of those people. I mean, to have Kimmo and Barbara and Damir and Lee free at the same time was kind of a miracle. So, >> if only we had technology that allows you to work with them. >> But there is a very different dynamic

than when you are remote. >> Oh, totally. For one, you cannot throw things at each other. >> Well, we won't be throwing things, but there can be heated discussions; I do admit that. >> Passionate people. What else would you have expected? >> On that bombshell, thank you all, and we will be back soon. >> See you soon. >> See you. >> Bye.