Transcript: The Road to ECS - S2 E4: Marco Casalaina
Hey everybody, and welcome to another episode of The Road to ECS. We are back in a new year. Is it a new us or not? Anyway, I think it's the old us. So, it's me, Adis Mustafa, and today with us we have a guest, Marco. You were with us at the last ECS in person last year in May. For those folks who haven't seen you live on stage there, how about a few words about who you are and what you do? >> Well, I'm Marco Casalaina. I am VP of Products for Core AI at Microsoft, and I am also AI Futurist at Microsoft. And you know, concretely, what that means is that I get to be the first person to play with anything new, as I constantly am doing. >> That's a really broad and all-encompassing term, and I was almost sitting on the edge of my chair, like, so what do you do exactly? And then you kind of elaborated: you're playing with things. So my instant question, to the extent you can say, because it's a public show: what was the last thing you played with? >> Sorry, a lot of network issues here, but
um, well, I mean, if you think about what is going to happen next: certainly one of the things that's happening, and has happened over the last month, is a sudden and dramatic improvement in computer use agents. So, for those people who were at my keynote at ECS last year, I showed some early versions of computer use agents that could basically use a computer. That's what they do. That's why they call them that. But those things have been pretty limited. And one of the things that I even showed, I think in the keynote, was that date pickers would often mess up your computer use agents. At least until December. Suddenly here comes the next crop of underlying models for these computer use agents, and they're much, much better, so they don't get tripped up anymore in the way that they used to. And that gives us some really interesting possibilities. I mean, we'll talk about agent optimization, and making all of your properties and things work with agents, and you can do that with MCP and stuff like that, but there are going to be a lot of cases where you just have no control. So, for example, inside of Microsoft there's a set of internal sites where I have no control over this, right? We have our expense management site where I put my receipts and things, and that's a homegrown thing, and it's not wonderful. We have our travel site, which is semi-homegrown, also not wonderful. And there's really nothing I can do about those. I mean, I can't agent-optimize those things myself. I'm not in those groups. I can only do it by influence. And I have been working with our expenses team, actually, to agentify that. But now I have another option, which is these new computer use agents, which can 100% defeat those crappy sites. So that does change the game. >> So does that work on any site? Because I can imagine, you know, we're talking about enterprise, and often what you see when you talk with companies who have existed for a longer period of time is that, especially for their own homegrown tools, these things run on anything and everything, and they have seen technology throughout 20 years or more. So do these new agents work literally on everything on the web? >> Everything that I've tried so far. I'm sure there's going to be some long tail of things that they can't figure out. There's also something else happening which is interesting, which is the phenomenon of nudging that is starting to show up on the scene here
and I noticed it actually over the weekend. I was using Claude Code a lot; I was rebuilding the next version of my questionnaire agent, which I often show, and I think I probably did show an earlier version of that at ECS. The version that I have now is much, much more advanced than before. But Claude Code has become nudgeable, interestingly. Which is to say: it used to be that if you typed something into Claude Code while it's working, it would kind of queue it up and then eventually get to it. Now, it does queue it up, but only for a moment, and then it takes it into account midstream. And that's also true of Opel, the computer use agent that I'm referring to, which by the way is already public, although it's not publicly accessible; but because the Opel team blogged about it, I can speak about it quite openly. Opel is nudgeable. And that's interesting, because it did happen once that I was doing something with my expenses that was actually pretty unusual, and even I couldn't quite figure out how to do it. Basically, we have these cost centers at Microsoft, you know, where the expenses get charged to, and I have my standard one. In this case, it was for Ignite, actually, the Ignite conference; I had to use a different cost center. So I had set up my whole expenses, I submitted the report, and then they sent it back to me saying, no, no, no, you used the wrong cost center. And I didn't really know how to fix that, but I set Opel to the task, and I kind of worked with it. So it was interesting, because it was using the expense site, and I could see it using it, and then I was kind of nudging it. I was like, I don't know, try that button over there. Because, like I said, even I didn't know how to do this; I had never done that before. And so, together, we kind of figured it out. So basically what you're starting to see now is this nudging concept, where you can actually modify the path of what an agent is doing midstream, not just at the end of a turn. And that does really change how you interact with these agents and what kind of output comes out. >> What are some of the benefits that being able to nudge midstream, as opposed to waiting until it's done with its turn, offers you or the users? >> Well, consider, for example: just moments ago, actually, I was using one of the computer use agents to make me some travel reservations. You know, before I go to Cologne, I actually have
to go to Dubai. I'm speaking in Dubai; never been there before. And I was making some travel reservations. Now, I need to be in Dubai the morning of April 7th, and I didn't, off the top of my head, really know how long it was going to take me to get from San Francisco to Dubai. And I guess the time difference too, right? Plus 10 hours from here, I think. So anyway, I had to nudge it a little bit. While it was going through there, I saw that it was starting to make the reservation on April 6th. Now, I could have let it go all the way through, but April 6th would not get me there on time. If you consider the flight time plus the time zone difference, I wouldn't be there on time. And I could see that straight away. It didn't figure that out at first. It would have figured it out eventually, because it would have gotten to the point where it was looking at the flight, like, "Oh, wait a minute. This flight gets in too late." But instead, I just nudged it. I was watching it and I said, "Actually, try one day earlier." And so it stopped what it was doing, it kind of went back to the search, and it started that again with April 5th. So it basically allowed me to interact with it the way you would interact with a person, right? It's kind of like if I was working with somebody, and that person was sitting there making my flight reservations, I might nudge them. And you can do other kinds of nudges, too. Like, okay, maybe let's see what happens if I stay a day longer. Is it going to be cheaper? Are the flights better? Can I get a direct flight? Those kinds of things. So the nudging thing, because you can alter what it's doing midstream like that, kind of saves you time and saves agentic effort, for lack of a better term. >> That is so cool, because it's exactly as you say: it's not just tokens saved. It's also the fact that, if you know a little bit about how LLMs work, all the tokens that are being used end up in the context window. So even if you want to retrace your steps, they aren't gone; they are still there. You kind of keep adding things to it, which in the end makes it really hard for the LLM to understand what exactly you want it to do. So if you can preempt that, that's insane. That's a huge difference. So, is that a new evolution? Is that something that is only applicable to these new models you mentioned, or is that
something that we can expect to see across the board, basically in all the models that we already use? >> I think you're going to see it globally. And you know, the nudging thing, it's not exactly a model function. It's more a function of the agent framework that sits above the model. And when you think about these computer use agents, although there is certainly an underlying model beneath them, it really is the agent framework above it, the CUA framework above it, that allows it to do what it's doing. And this is true of Claude Code too, right? I'm not using it so much as a CUA, but nudgeability is increasingly going to be a feature of, I would say, every general purpose agent, every agent framework, and it's going to become part of our lives. Especially if you look at what is going to happen: even at ECS last year, I did go into the voice capabilities, or the beginnings of it. We kind of explored the voice models and the real-time models; I made them speak a few different languages, that kind of thing. But that's going to become more and more prominent now. Through the course of 2026, voice is going to become much more of a thing, and it will be more and more interactive. Because today, if you turn Copilot or ChatGPT to voice mode, it becomes a voice bot. It's nothing but a voice bot, you know: you talk to it, it talks to you, and nothing else. That's probably not how this is going to go. In reality, it's going to be more collaborative. You'll be talking to it, so I can nudge it with my voice instead. Now, right now, at least the Opel that I'm using doesn't have a voice function yet. It almost definitely will. And then I will nudge it with my voice while it's working, while it's doing something. And so you'll see more and more of what I'll call voice-enabled AI applications: not just a voice bot, not just a chatbot. So that's going to be a thing, I think, through the course of 2026. >> Yeah, sorry to interrupt you here, but I think you're definitely spot on, because we as a company are using a lot of Mistral models for our internal development; they work beautifully in AI Foundry. What I have noticed with Devstral and the Devstral coding tools, which are basically comparable to Claude Code, is that they already work nicely with voice, with Mistral's voice model. So you
can kind of... not out of the box, not as you have described it, Marco, but if you put a little development and orchestration effort around it, you can actually already do this. You need to put in some of your own work, but it is already kind of there. >> Yeah. Now, one of the things that we are working on, actually, is voice. Voice with AI right now is a little bit strange, because it forces these turns. That is to say, when you use voice in whatever tool you were just discussing, which I don't think I'm familiar with... but just in the last couple of days I've used voice both in GitHub Copilot and in Warp. I'm a big fan of Warp, and I was using Warp's voice mode, but it's very transactional. That is to say, I press the little microphone button, I say something, it transcribes it, now it's on the command line, and it goes. Warp is a command line AI, which is super handy for doing stuff like administering Azure, right? Warp is great for that. You can say to Warp, like, you know, give Adis whatever permissions he needs to access... >> No, no, that's not honest, man. Come on. >> But it's very transactional, you know. And that's not really what I want it to be. What I want it to be is actually more like this discussion that we're having here. I want it to be always on, and effectively kind of multi-threaded, in the sense that I can be talking to you and
you could be doing something else at the same time, related to what we're talking about. You know, kind of like when you think about pair programming back in the day. Twenty years ago, me and a guy named Matt Hoe were pretty dedicated pair programmers. So me and Matt, we would literally have two keyboards and two mice connected to the same computer, and we would be programming like that all day long, for like eight hours. I mean, it was exhausting. But between the two of us, we were ridiculously productive. I mean, we did things in a day that would take one developer three weeks. Partly because Matt Hoe himself is an excellent developer, but partly just because we were playing off each other, and we would catch each other's errors on the fly and be like, "Oh, no, no, no, that could throw a null pointer exception," or whatever. And that's how I want it to be. I don't want it to be transactional. I want it to be nudge-style, where I'm kind of having this continuous conversation with my coding agent, in this case, and I can just talk to it and be like, "Yeah, you know, maybe that's not the way to go with that; let's see if we could try a different direction." And this is effectively what I was doing with Claude Code all weekend, except with text. It doesn't really have a voice function, so I was just typing to it, nudging it in the direction that I wanted it to go. But it would have been more fluid if it were a voice conversation. Now, the challenge with that is that right now, the way the voice models work, and this is pretty much all of them, is that they are all kind of single-threaded. So it's made for: you're talking, and then it's talking, and then you're talking, and then it's talking, and then it does something. Then it does a tool call, and then it does something. What we want to get to is kind of a dual-threaded voice conversation, where the voice is sitting in a layer above, and the tool calls are kind of asynchronous, so it can presumably talk and do something else at the same time. You know, walk and chew gum at the same time. I'm pretty sure I've seen you do that before, Adis. >> Oh, I can do that. But that brings me to an interesting point, because the other day I read, there were quite a few folks on X, and I think one of them was the developer who built the Claude Code CLI, and he shared the workflow that he uses at work, and the
way he codes is that he will start... >> My friend! But I respect the guy. Did you try what he did? >> Sorry, what? >> Did you try his recipe? >> I literally ran a few instances of VS Code GitHub Copilot in parallel. >> How soon did you run out of tokens? >> Oh... >> And I need to tell you, I've got the highest subscription, Claude Code Max. >> Anyhow, not to derail your point. >> So I connected my Claude Code to Claude on Azure, and I have unlimited tokens now. >> Yes... >> I ran up a bit of a bill, but I did not run out of tokens. I also used to run out of tokens constantly with Claude Code, like every day, and now I have unlimited tokens, because I get unlimited Azure. >> But this is a very interesting concept that he was describing. Going back to my original point, I'll really try to make it this time. So say we are in this world where you start four of these things. How do you think that would look with this multi-threaded speech? How would you nudge them if you have to observe, or participate in, four threads at the same time? How would that work, you think? >> I mean, that's interesting, and I guess we're going to see. The way that I was spawning multi-agents with Claude Code this weekend: well, I used the built-in function of Claude Code, which has upsides and
downsides. I mean, the upside is, well, it spawns the agents itself, these kind of background processes. And they are kind of centrally nudgeable, because there's sort of a central orchestrator in the middle of it, and that's what I'm watching. The downside is that, yeah, the visibility is lacking there. Now, Scott Hanselman actually, last week, worked up some kind of console where he could watch them all go at the same time. I haven't really gotten into that too much. He showed it to me last week and I was like, damn, dude. Okay. But yeah, when you have multi-agents... the multi-agent observability story for Claude Code is not great at the moment. For GitHub Copilot, it's a little bit better. If you are using VS Code Insiders, you'll see that it has that kind of multi-agent monitor console now that shows up in there. Still not perfect, but at least you can see them all working. But your question is still valid, which is: how do I choose what to look at, what to nudge, and that sort of thing? And there are only so many cases where you really do spawn these sub-agents. In reality, when I often spawn them is when I'm making it do automated testing. I'm like, you know what? I'll spawn six of these things and have them each run different tests. And some of these tests are codified; they're literally unit tests that are checked in. And some of them are ad hoc, you know, for whatever issue I'm having it fix. So, especially for the ad hoc ones, I want them to do that in parallel; it can, so do it, you know. So I don't need to nudge it that much; I don't usually nudge an automated test very much. >> Let's nudge the ECS topic a bit. My dear friend Lis, whom we all know very well: a few weeks ago on this very podcast, I asked him, so what did you like the most about ECS? And he was totally straight: the registration process. It is so fast. You come there
with a QR code and you get the badge. Marco, I hope that's not what you like the most about ECS. I hope you liked other things. >> I hope there's something else. >> Yes. >> Well, certainly I enjoyed it. I mean, certainly it's a large event. What is it, 3,000-some people, I think, that show up for this thing? It's a pretty diverse event. You see people from really all over Europe at this event, and that's kind of interesting, because, you know, I visit a lot of European countries. Roughly once a month I find myself in Europe. But when I'm there, when I'm in the Netherlands, I'm meeting with Dutch customers; when I'm in Germany, I'm meeting with German customers, and so on. Relatively rarely am I at an event where there's this whole bunch of folks from all over Europe. And there's a lot of similarity between different people from different countries, in what their concerns are, where they're at. But there are also some major differences that you find, especially when everybody's in the same room. And so I personally find that to be pretty interesting: to be able to do this kind of comparative study of where people are at, what they're thinking, what they care about, what they're concerned about. >> That is really important, especially because, and I am German, I live in Germany, and I can say it, not proudly, but I can say it: Germans are sometimes very reluctant to accept the new; they are more conservative than the Dutch. I mean, Valdec, I think you and me are going to agree with that, even if we are neighboring countries. >> Not conservative at all. Yeah. >> And Germans actually are, in that regard. And I think Mustafa can confirm this one, because we had a few conversations with our attendees last year, and they said: yeah, this is awesome, one of the best conferences, or the best conference, we ever had. However,
there is so much AI that almost every second session, or two-thirds of the sessions, are about AI. Do we need to have so much AI? I honestly expect this to be a non-question this year. So much has changed between last year and now, and especially, as we mentioned in preparation for this conversation with Marco, so much is going to happen even in the next four months. Mhm. >> But on this basic question, is AI just another hype, like we had blockchain, like we had the metaverse, like we had, name all of those: I think the realization is settling in that it is not. It's a fundamental change in how we are going to do things in the future. I mean, I was talking to our developers a lot in the past few days. I'm not sure that within one year we will still be doing development the way we do now; don't get me wrong, we will still be doing development, but in an entirely different way than we are doing it even now, and even now we are changing it a lot. >> Right. Yeah. Well, I mean, it's funny
because if you think back... and I could say this is my fourth bubble. I've been in Silicon Valley the whole time; my entire adult life I've been here in Silicon Valley. I am coming to you, by the way, from Oakland, California, which is kind of the furthest extent of Silicon Valley, but it does extend here. Lots of Silicon Valley friends live all around me; all my neighbors are tech people. And I mean, imagine if you were doing this conference in 1998, and people would be like: does everything have to be about the web? You know, that's exactly what they would have said. And you know that in April of 2000, I was at a dotcom boom company, and everything crashed. The whole bubble crashed. But the web survived. Everything is web-based now. Everything is internet-based now. Maybe now you access it via your mobile phone, not necessarily just websites and stuff like that, but those core technologies survived. So there was a bubble, and yet it persisted. And that's going to be true of this too. Certainly there are some bubbly aspects to the private market valuations of a lot of these companies, and stuff like that. Certainly, no doubt about it. And you see some weird blips on the market with the GPU vendors that really have no intellectual property at all. They're like: well, we have GPUs. I don't know if that's a sustainable business. But what will stay is... I mean, the fact is, the agents are going to become our portal to the world, in the way that the web became our portal to the world in the late 90s and the early 2000s. And that's going to change how we do things: how we make travel plans, how we shop for things. It's going to change lots of stuff. Even in the last couple of days, if you've been paying attention to the news, a couple of new commerce protocols have shown up on the scene: Google's UCP, ChatGPT apps, stuff like that. At the moment, they're kind of fragmented, and so it's not clear what's going to become a standard. And I can't wholeheartedly recommend to one of my customers: you should go make a ChatGPT app, or you should support UCP, because these things are brand new. But this is going to be a thing. A year from now, when I go to order AAA batteries, I'll probably just tell my phone: order me some AAA batteries. And it'll just do it. Does it get them from Amazon or Walmart or Target? I don't care. Does it get Duracell or Energizer? I also don't care. I just want the batteries, as long as they don't explode and the price is good. Give me some batteries. And that's what's going to happen. >> A question: it's kind of hard to imagine going backwards. We are so used to using all those tools now, and I completely agree with Marco: are things going to shift more? Certainly. We don't even know what's going to happen in May, right? But going back to the old way of doing things, I think that's impossible,
and what Adis mentioned about the content: people did say a lot about last year, like, does it all have to be so much AI? Well, wait till you see the content this year. I don't think there's a single session not mentioning AI in some context or another, because it might not be AI-focused, but AI is present everywhere. Like, a big chunk of the content is around security, but nowadays it's impossible to talk about security without looking at it from the AI aspect: how AI is influencing it, what the threats are, and the threats are also powered by AI, and so on. So it's kind of like everything is touched by AI in some way. >> Right. >> We have a lot of development content. We just mentioned... I mean, a guy who's a buddy of mine from Microsoft, and you probably know the name at least, if not him: Clemens, the guy who is kind of the legend around messaging and queuing at Microsoft. Clemens wrote a very interesting thing on LinkedIn the other day: hey founders, your biggest competition isn't another founder. It's a 50-year-old guy who grew up with 8-bit computers in the 80s. And he is right. He is so right. But it does tell us a few other things. It's the developer job, and we have a lot of developer audience: the developer job is going to change. It's not going to go away. I don't think it's going away, because in order to make good software, you need to know what you're doing. However, this friction of learning every single new framework, every single new pattern, every single new thing: that's gone, probably, in a few years. I was discussing with our also-common friend Dame Dobage a few days ago: the question is if we are going to have programming languages as we know them today in three or four years. What's the purpose of programming languages as such? However, you will still need to know how to create software, what's important for the software, what's important for the architecture. But the role of developers will be moving from coders to understanders and orchestrators, if I may say. >> Yeah. Well, I mean, consider what I've been doing this weekend. As I mentioned, I spent a lot of this weekend working on this project, because I have this side project in which I am automating part of my own job: my questionnaire agent for analyst questionnaires, RFPs, things like that. And it's become increasingly robust.
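The maker-checker pattern Marco goes on to describe can be sketched, very roughly, like this. This is a hypothetical illustration only, not his actual implementation: the two model calls are replaced by deterministic stand-ins, and all names are invented.

```python
from typing import Optional

def draft_answer(question: str, feedback: Optional[str] = None) -> str:
    """The 'maker': drafts an answer, optionally revising per checker feedback.

    A real version would call an LLM with the question (and any feedback)
    in the prompt; this stand-in just builds a string.
    """
    answer = f"Draft answer to: {question}"
    if feedback:
        answer += f" (revised to address: {feedback})"
    return answer

def critique(question: str, answer: str) -> Optional[str]:
    """The 'checker': returns None if the answer passes review,
    otherwise a feedback string the maker must address."""
    if question.lower() not in answer.lower():
        return "answer does not restate the question it addresses"
    return None

def fill_questionnaire(questions: list[str], max_rounds: int = 3) -> dict[str, str]:
    """Maker drafts, checker reviews, maker revises -- all before a human edits."""
    answers: dict[str, str] = {}
    for q in questions:
        feedback = None
        answer = ""
        for _ in range(max_rounds):
            answer = draft_answer(q, feedback)
            feedback = critique(q, answer)
            if feedback is None:  # checker is satisfied, move on
                break
        answers[q] = answer
    return answers
```

The point of the draft-review-revise loop is that the checker catches problems before a human ever opens the spreadsheet, much like a pair programmer catching errors on the fly.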
Now, in this version, I can feed it an entire spreadsheet, and most of the time it will just fill out the whole thing. I don't really even have to edit anything anymore. As I have been improving it, it gets better and better, and it's got this whole maker-checker pattern in it, so it kind of goes back and forth and fixes itself before I even go to edit it. It has lots of these kinds of things, and I'm making it better and better in the presence of more and more complicated kinds of questionnaires and structures and that sort of thing. So I was making a bunch of improvements to it over the weekend, and what did I do? I was really doing what we call PRD-first development, or structured vibe coding. Because I wasn't just vibe coding it; I actually spent a whole bunch of time writing a big PRD, and I used Spec Kit for it. If you haven't used Spec Kit, it's freaking great for that. >> Spec-driven development. >> Spec-driven development, but specifically Spec Kit, which is an open source tool that's available on GitHub. It helps you write the spec, because it lobs questions at you. So you start specking the thing out, and then it starts lobbing things at you that you didn't think of, where it'll be like: so, what do you want me to do if the file is password-protected? What do you want me to do here? All these different kinds of things that you kind of have to answer, that clarify what your spec is, that clarify
kind of the edge cases. So it tries to think those through. But this is key. I mean, to your point, do coding languages exist in the future? I guess they probably do. But you could look at them, in a sense, as sort of a bytecode. In the same way that, you know, if you're writing Java or Python, it gets translated to bytecode, and that's what gets run; that intermediate thing is what gets run. So you could look at it as: the spec is the code, actually, and whatever gets generated from that is the bytecode. I mean, you probably have never explored the actual Java bytecode that the Java VM runs. >> Every day at 3 AM. >> A little bit. I mean, I know how to read it. But anyway, you look at Java, you don't look at Java bytecode; or C#, not .NET bytecode. Same thing here, you know. I mean, I wrote this whole thing, thousands of lines of code, over the weekend, and I didn't really look at the code very much. I was mostly working with the spec. >> But then again, you are a developer at your core, and you know how to write that spec; you know what needs to go into the spec. That's back to Clemens' point about the 50-year-old guy who grew up in the 80s with the 8-bit computers. >> Sure, but this is why I use Spec Kit also: because Spec Kit will kind of put you in this box where it will make you write a spec that is amenable to consumption by AI, and it'll format it all out with user stories. I will
probably show this at ECS, you know, in my keynote coming up, or whatever the version is by May. And I do hope that by May it'll be better integrated. It's actually reasonably well integrated into GitHub Copilot, but I would like to see it better integrated into VS Code in a larger sense. Because planning is the new execution. That's the thing. Execution is cheap, you know. I give this gigantic spec to Claude Code, and it goes and noodles on it for 17 minutes, and I go get myself a cup of coffee. I come back and it's done. I'll nudge it a little bit, and it'll do what I want it to do. So execution becomes cheap, and what you want then is for it to execute correctly, and that means planning. And there are a number of different aspects of that. There's the upfront planning aspect. There's kind of the rules architecture that you have to give it. This is one thing, by the way, that Warp does a really good job of. You know, it has a global rules section, and you can give it these rules that say: whenever I say this, do this. Like one of the rules I give to Warp (yeah, I don't use Warp as much as a coding agent, but sometimes I do, because you can do that): never check in my .env file, you know, the environment file where you put keys and stuff like that. With all my coding agents, I'm like: don't check that in. Stupid, right? But this is a global rule, something I never, ever want them to do; that should always be in the .gitignore, and nothing else. So rules are key. And now everybody has coalesced on this AGENTS.md. There used to be CLAUDE.md and copilot-instructions.md, and everybody had their own .md; more and more, everybody's coalescing on AGENTS.md as a global agent instructions file. With the recognition that... I mean, even I myself have mentioned three different coding agents that I use, literally on the same codebase, right? I'll use Claude Code, Warp, and VS Code GitHub Copilot,
That's part of it also. And then, yeah, there's separating out the planning loop from the execution loop. I run Claude Code in Opus Plan mode; I don't know if you know it, but it has an Opus Plan mode where it uses Opus for planning but Sonnet for execution. >> For execution, yeah. >> That's clever. >> That is clever. I mean, I don't know if
you have noticed, and I don't know if it's the same with other models, but as I said, in our product we use a lot of Mistral, and in December Mistral announced new pricing, which is really interesting: for the first time I've seen an AI company charging more for input tokens than for output tokens. >> Yeah. Usually it's the other way around. >> Usually, yeah, even though the planning is actually more work. >> Yeah. >> It's really the first time I've seen somebody charging more for the input tokens. I'm like, okay, interesting. >> I think GPT Realtime is like that too; the audio model also charges more for input tokens than output tokens, so it's not unheard of. It also represents the depth of processing: it's not just the planning, it's also the tokenization on the inbound side, right? It has to tokenize everything before it goes into the model, whereas on the way out it just has to do decoding.
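As a back-of-the-envelope sketch, here is what that pricing shape means for a typical agent request, where the prompt (spec, rules, retrieved context) is far larger than the completion. The rates below are made-up placeholders, not Mistral's or anyone else's actual prices:

```python
def request_cost(input_tokens: int, output_tokens: int,
                 in_price_per_m: float, out_price_per_m: float) -> float:
    """Dollar cost of one request, given per-million-token prices."""
    return (input_tokens * in_price_per_m
            + output_tokens * out_price_per_m) / 1_000_000

# Hypothetical rates with input priced ABOVE output, as discussed.
# A big planning prompt dominates the bill even when the answer is short:
cost = request_cost(input_tokens=50_000, output_tokens=2_000,
                    in_price_per_m=3.00, out_price_per_m=1.00)
print(f"${cost:.4f}")  # → $0.1520
```

Under this scheme the 50k-token prompt accounts for nearly all of the cost, which is exactly why input-heavy agent workloads feel the flipped pricing most.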
>> God, we really geeked out on this one. >> We totally geeked out. But it's not only going to be geeking out; we will have a lot of sessions covering a lot of different aspects. Mustafa mentioned security. Security today without AI, well, who would have said so even a year ago? Who would even have thought that security without AI would be a non-starter, because you are losing from the very beginning. >> When I look at it from a content perspective, as a content owner for ECS: in '23 there were maybe one or two sessions on AI, right? In '24 there was a glimpse of it, but not that much. '25 was a boom; it was all over, like two-thirds of the sessions were around AI. This year I think there's not a single session that doesn't involve AI in some way or another. Maybe there aren't more AI sessions as such, but everything is touched by AI, right? >> From different
perspectives. We also cannot neglect the M365 perspective, which is a big one. While your team is bringing those perspectives from the Microsoft side, we will also have Adam Harmetz joining us this year, which is an awesome thing, to tell us everything the Microsoft 365 group is doing around Copilot, and how AI influences the work not only of developers, which is what we're talking about now, but also of the people doing their day-to-day work with technologies like Microsoft 365, the Power Platform, and Fabric; I mean, AI plus Fabric is the big thing right now. So we will have a lot of conversation about that. But of course, when we speak about AI and the cloud summit, which are the core topics, we'll definitely have a lot about software crafting, a lot about development, a lot about security, and a lot about Azure, and how all of that works with
Foundry and with the new Agent Framework. And even for me, who has this as a job, it's sometimes really difficult to follow everything Microsoft is announcing, especially around Ignite and Build. It's a race against time just to pick up even the new names. >> Yeah, even for me, and I work here, you know, I have a challenge with that. But one of the stories that I think is going to emerge now, certainly with M365, the story number one is the rise of the general-purpose agents: M365 Copilot is one of them, Claude of course is another, ChatGPT is another, and their ability to do what you need them to do keeps increasing. As I mentioned, I'm writing this questionnaire agent, and it's pretty good at answering these questionnaires. I've been using various iterations of it over the last year and a half, and it automates a good part of my job. But every time I do this, and in fact I'm going to do it
again when I finish what I'm working on right now: if I just give this questionnaire directly to Opus 4.5, and I consider Opus 4.5 to be the best at this right now, does it do it right? I use that as a baseline, and I have done this with Copilot as well; I'll give it to Copilot in GPT-5 mode, now GPT-5.1 or 5.2: does it do it right? Do I even need to write something to do this, or will the general-purpose agent be able to do it for me? But the other piece of it is delegation. So
increasingly we're going to see delegation. One of the things we launched at Ignite are the IQs: Foundry IQ, Fabric IQ, Work IQ. There are three IQs, which the marketing team kind of came up with at the last minute, so we all had to scramble in the last week before Ignite. Everybody was like, "Oh, we're changing the names to IQ." >> Unheard of at Microsoft. Unheard of. >> It happens everywhere. But there is a common thread between the IQs. It was a little bit accidental, but each one of the IQs represents a form of delegation. Think about it: back at the end of 2024, I was at Ignite in Chicago, and on the second-to-last day my laptop died. So I had a dead laptop, which wasn't great. Now, there are things I know how to do as part of my job, obviously, development and things like that. But one of the things I definitely do not know how to do is literally order a new laptop from whoever our supplier is. I don't even know who our supplier is. Now, I do know how to make a ServiceNow ticket, which is what I had to do: I made a ServiceNow ticket to request a new laptop, and by the time I got home, there was a laptop; they FedExed it and it was sitting on my doorstep. But I don't know how to make that order. I delegated that task. I made the ServiceNow ticket, and somebody somewhere picked it up, knows how to
do that task, and did it. Now, we're going to see the same thing apply to agents, because whether it's a general-purpose agent or an agent you're building yourself, it can't keep everything in its head. It has a fixed context window. Same as I do, right? I also can't keep everything in my head; I don't think I even want to know how to order a laptop from our supplier. So what these IQs represent is this delegation: Foundry IQ for unstructured data, Fabric IQ for structured data, Work IQ for data that's in the graph or SharePoint, those kinds of things. But all of them represent other agents, because all three of the IQs are in fact another agent to which your agent is delegating. So if my agent needs to go get data from the data warehouse, what I don't want to do is clutter up my agent's context window with the data dictionary and the instructions and the example queries and everything else you need to put in there to make it successfully query a structured data store. No, instead I want to put all of that in the Fabric data agent in Fabric IQ, and I want my agent to just call that one: if you need data, call that agent, use natural language, you don't need to know the schema; it'll figure it out and spit the answer back at you. That's delegation, and that becomes increasingly important as these agents start to do more and more complicated jobs.
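A minimal sketch of that delegation pattern might look like this. The class and method names are purely illustrative, not the actual Fabric IQ or Foundry API; the point is only that the schema lives with the specialist, not in the orchestrator's context:

```python
class FabricDataAgent:
    """Specialist agent: it alone holds the schema, data dictionary, and
    example queries for the structured store, keeping them out of every
    caller's context window."""

    def __init__(self, schema: dict):
        self.schema = schema  # lives in THIS agent's context only

    def query(self, question: str) -> str:
        # A real implementation would translate natural language to SQL
        # against self.schema; here we just simulate the round trip.
        return f"rows for {question!r} (resolved against {len(self.schema)} tables)"


class OrchestratorAgent:
    """General-purpose agent: delegates data work instead of carrying the
    schema and example queries in its own context."""

    def __init__(self, data_agent: FabricDataAgent):
        self.data_agent = data_agent

    def handle(self, task: str) -> str:
        # Delegation: plain natural language out, an answer back; no schema needed.
        if "data" in task.lower():
            return self.data_agent.query(task)
        return f"handled locally: {task}"


warehouse = FabricDataAgent(schema={"sales": ["region", "qty"], "customers": ["id"]})
agent = OrchestratorAgent(warehouse)
print(agent.handle("get Q4 sales data by region"))
```

The orchestrator's context stays small because the expensive knowledge (schema, example queries) is encapsulated behind a natural-language interface, which is the ServiceNow-ticket pattern applied to agents.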
>> Wow, it's going to be an interesting time for us. One last question from my side, Marco. One thing, and I had this discussion with, I think, Vesa, and he said he noticed my face got a bit redder than usual. A lot of people didn't know that the cable connecting the two main rooms at ECS 2025 was working the evening before; then somebody did a full reset, it stopped working, and it didn't work again until five minutes before my opening and Marco's keynote. Imagine my emotional state standing on that stage, not knowing whether we were starting on time or how we were starting. And then I looked at Marco: pure calmness. He knew what was going on. No change in facial expression. Everything was fine. How do you do that? >> I do have a calm disposition, and because I do stage events a lot, there's always some last-minute issue, and I usually come prepared for everything. You know, I was doing a keynote a few months ago in Nairobi and, oh my god, it was a mess. They didn't
even know how to set up our little confidence monitor. You know, some people before me needed to do slides. I don't really do slides, but they needed to; they wanted their notes on the screen, and that's fairly standard conference fare. These folks in Nairobi could not figure out how to do it. But fortunately, I carry two different HDMI adapters, my laptop supports three screens, and I said, I'll do it myself, five minutes before the session actually began. So that kind of thing happens to me a lot, and I'm always able to work through it, I guess, and I always have been. So it's a combination of, yes, I do have a calm demeanor, but also a lot of experience with this kind of thing, so it just doesn't bother me anymore. >> We had to split the keynote across two rooms because the main keynote room takes only 1,400 people and we could not fit everyone; that's why I did the splitting. I promise you, for Cologne the main keynote
room can take all of us. We will have a room for 3,000 people, and we will not be forced to do the back and forth that we had, if you remember, in Düsseldorf. >> I'm going to make it worth their while, that's for sure. You know, like I said, it's hard to predict, even for me, where we're going to be in May. Although a lot of the things we've been talking about here will certainly still apply in May.
The planning stuff and the idea of delegation will still be true, but we'll have new tools at our disposal four months from now that may not exist today. >> I'm really happy to call you my friends and to work closely with the Microsoft cloud advocates, both on the AI side with your team and on the Microsoft 365 side; you guys are doing an awesome job, along with what Daniel and April Dunnam's team is doing on the Power Platform side. The stuff I see you guys doing around this is actually awesome. I want to see all those sessions that are coming. >> Unfortunately, you will see none. >> Yes, unfortunately I'm going to see none. I will share a glass or two of wine with you guys in the evenings, that's fine. But we are probably going to see none of those sessions; we'll be running around trying to fix things. >> Yeah. >> On the fly. >> As long as people are happy, we are
good. >> Yes, that's why we do it. Marco, thank you a lot, and I'm looking forward to sharing a glass of beverage with you in May. And Valdec, Mustafa, thank you for another awesome episode. And yeah, >> we will be back. >> We will definitely be back >> on the road to ECS. >> On the road to ECS, and we will be back together at the latest in Cologne. Thank you everybody for taking the time to listen to us, to watch us, and see you all in Cologne. >> See you.