WEBVTT

00:00.000 --> 00:10.200
Welcome. Now we are going to talk about some interesting things about Java

00:10.200 --> 00:15.200
profilers: what they are, where we are now, and how we are going to advance them.

00:15.200 --> 00:17.200
So, let me start with this.

00:17.200 --> 00:22.200
So, you know the situation when you come to a party or meet new people and somebody just goes,

00:22.200 --> 00:26.200
like, "What do you do for a living?"

00:26.200 --> 00:30.200
Well, I do Java profiling at Datadog.

00:30.200 --> 00:33.200
So, what does that mean?

00:33.200 --> 00:38.560
Well, to be honest, it's tooling to collect data, explain data points, and provide all the

00:38.560 --> 00:44.560
insights and, well, by that point whoever I'm talking to has usually stopped listening.

00:44.560 --> 00:46.200
So, what are you really doing?

00:46.200 --> 00:51.200
Well, I'm kind of a performance crime detective, you know.

00:51.200 --> 00:55.200
So, are you chasing software developers?

00:55.200 --> 00:56.200
I wish.

00:56.200 --> 00:57.200
No.

00:57.200 --> 01:01.200
I'm working on tools which provide answers to questions like: what is burning

01:01.200 --> 01:02.200
your CPU?

01:02.200 --> 01:06.200
Why did your pod get OOM-killed?

01:06.200 --> 01:09.200
Why is your service timing out?

01:09.200 --> 01:12.200
Why is the latency spiking? And so on.

01:12.200 --> 01:14.200
And for this?

01:14.200 --> 01:17.200
Well, I have tools to talk about.

01:18.200 --> 01:22.200
Yeah, well, as I mentioned in the beginning, we're going to talk about profiles, right?

01:22.200 --> 01:27.200
So, we have a toolbox for Java, and there's something called JDK Flight

01:27.200 --> 01:29.200
Recorder, right?

01:29.200 --> 01:35.200
There is also async-profiler, and something called the OpenTelemetry

01:35.200 --> 01:40.200
eBPF profiler — maybe you know about it — which we're also going to mention today.

01:40.200 --> 01:43.200
So, there's a bunch of tools.

01:43.200 --> 01:46.200
So, um, can you give us an introduction?

01:46.200 --> 01:47.200
Yeah, I can do that.

01:47.200 --> 01:50.200
So, what's this JDK Flight Recorder?

01:50.200 --> 01:54.200
I heard you did something on this 30 years ago?

01:54.200 --> 01:58.200
Well, I'm just helping out, but here we have Erik Gahlin from the JFR team.

01:58.200 --> 02:00.200
He could tell you much more about this.

02:00.200 --> 02:03.200
So, he's on the actual team.

02:03.200 --> 02:05.200
I'm just helping out with some stuff.

02:05.200 --> 02:08.200
But it's a really neat tool inside the JDK.

02:08.200 --> 02:11.200
It's lightweight, and it's not only a profiler — it's a big

02:11.200 --> 02:18.200
observability tool which can give you insights into how your JVM is behaving,

02:18.200 --> 02:20.200
and also your application.

02:20.200 --> 02:22.200
It has very minimal overhead.

02:22.200 --> 02:26.200
It's extremely stable, extensible, and okay, let me repeat it.

02:26.200 --> 02:28.200
It's built into the JDK.

02:28.200 --> 02:29.200
So, it's always there.

02:29.200 --> 02:30.200
That sounds very good.

02:30.200 --> 02:32.200
Can you explain to me how it works?

02:32.200 --> 02:34.200
Where do we get the execution samples from?

02:34.200 --> 02:39.200
Yeah, well, the execution sample is kind of a CPU profile.

02:39.200 --> 02:42.200
Well, it's an approximation of CPU profiling.

02:42.200 --> 02:45.200
And there is this algorithm behind it.

02:45.200 --> 02:48.200
So, imagine that on the right side there is a thread list,

02:48.200 --> 02:50.200
which we have in the JVM.

02:50.200 --> 02:54.200
So, the JFR sampler goes in every N milliseconds.

02:54.200 --> 02:57.200
By default, we have like a 10-millisecond profiling interval, or more.

02:57.200 --> 03:02.200
It scans the thread list and picks the next non-blocked Java thread.

03:02.200 --> 03:04.200
That means a thread which is not parked —

03:04.200 --> 03:05.200
it's just kind of running.

03:05.200 --> 03:09.200
Then it suspends the thread, takes the stack trace, and generates the sample.

03:09.200 --> 03:11.200
Then it goes back and picks the

03:11.200 --> 03:15.200
next non-blocked Java thread and does the same thing.

03:15.200 --> 03:23.200
But sometimes it can happen that acquiring the stack trace will result in an error.

03:23.200 --> 03:26.200
In which case the JFR sampler currently just skips it.

03:26.200 --> 03:28.200
Okay, I don't know what to do with this —

03:28.200 --> 03:30.200
so it will just skip it.

03:30.200 --> 03:34.200
And it will repeat this until it fills up five sample slots.

03:34.200 --> 03:35.200
Right?

03:35.200 --> 03:38.200
So, this is happening every 10 milliseconds.

03:38.200 --> 03:39.200
I mean, that's fine.

03:39.200 --> 03:45.200
For most users, it will give you a pretty good view of what your CPU is doing.

03:45.200 --> 03:49.200
But this seems to have a problem, because with this scheme

03:49.200 --> 03:52.200
we're getting only, like, five stack traces

03:52.200 --> 03:55.200
every 10 milliseconds, no matter the thread count.

03:55.200 --> 03:59.200
So, that's a problem. I plotted the effective sample period per thread

03:59.200 --> 04:05.200
for processes with different numbers of threads, and you can clearly see that the effective sampling rate

04:05.200 --> 04:09.200
degrades linearly with the thread count and the stack-walking error rate.

04:09.200 --> 04:14.200
So, that's pretty bad, because now we don't have any clear relationship between CPU and samples —

04:14.200 --> 04:19.200
between CPU time and samples — and that's not great, is it?
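
A back-of-the-envelope sketch of the effect described above: if the sampler fills at most five sample slots per 10 ms tick and round-robins over the thread list, the effective per-thread sampling period grows with the number of threads. The numbers are illustrative, not JFR's exact internals.

```java
public class EffectivePeriod {
    // With `slots` samples taken per tick, a given thread is sampled roughly
    // once every ceil(threads / slots) ticks when the sampler round-robins
    // over the thread list.
    static long effectivePeriodMillis(int threads, int slots, long tickMillis) {
        long ticksPerThread = (threads + slots - 1) / slots; // ceiling division
        return ticksPerThread * tickMillis;
    }

    public static void main(String[] args) {
        for (int threads : new int[] {5, 50, 500}) {
            System.out.println(threads + " threads -> one sample per thread every "
                    + effectivePeriodMillis(threads, 5, 10) + " ms");
        }
    }
}
```

With 500 threads, each thread is sampled only about once a second, which is why the relationship between CPU time and sample counts gets lost.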

04:19.200 --> 04:23.200
But you've used a couple of other profilers.

04:23.200 --> 04:27.200
I'm not looking at any specific profiler.

04:28.200 --> 04:32.200
There were profilers that had some safepoint bias.

04:32.200 --> 04:37.200
So, essentially, that meant — a safepoint essentially means that

04:37.200 --> 04:40.200
there are points in a program where you can reason about the program —

04:40.200 --> 04:43.200
for example, that it has a well-defined state.

04:43.200 --> 04:46.200
This is the point where we do, for example, garbage collection.

04:46.200 --> 04:50.200
And do we still have safepoint bias?

04:50.200 --> 04:51.200
No, no, no.

04:51.200 --> 04:52.200
No, we don't.

04:52.200 --> 04:53.200
No, no.

04:53.200 --> 04:54.200
No, no.

04:54.200 --> 04:58.200
In JFR, the JFR sampler for the execution sample

04:58.200 --> 05:00.200
is not stopping at safepoints.

05:00.200 --> 05:03.200
So, it will just land at any place where it happens, right?

05:03.200 --> 05:05.200
But is that the whole story?

05:05.200 --> 05:06.200
Well, yeah.

05:06.200 --> 05:10.200
But there is this thing called symbolication, where we actually

05:10.200 --> 05:16.200
need to get the human-readable form of the frames.

05:16.200 --> 05:19.200
Like, in the stack trace we have frames, and the frames are basically addresses,

05:19.200 --> 05:22.200
and we need to somehow figure out: what the hell is that?

05:22.200 --> 05:25.200
What is the name of the method, and the line number?

05:25.200 --> 05:28.200
And this turns out to be slightly surprising

05:28.200 --> 05:31.200
if you are not very familiar with the internals.

05:31.200 --> 05:34.200
So, how do we do that?

05:34.200 --> 05:37.200
We ask the JVM to give us the information.

05:37.200 --> 05:40.200
For the interpreted code, this is pretty easy.

05:40.200 --> 05:41.200
Let's say easy.

05:41.200 --> 05:43.200
The JVM does all the heavy lifting.

05:43.200 --> 05:46.200
We will get the full information for the interpreted frame.

05:46.200 --> 05:47.200
That's fine.

05:47.200 --> 05:51.200
But for the JIT-compiled code — the JIT compiler is a big beast, right?

05:51.200 --> 05:54.200
So, it will just transform the code all over the place,

05:54.200 --> 05:57.200
and it might get difficult to actually get back to your source code.

05:57.200 --> 06:01.200
So, what it does is insert something called debug info

06:01.200 --> 06:06.200
in places, and the symbolication part uses this debug info.

06:06.200 --> 06:11.200
So that means, like you see, we have a bunch of — this is assembly, by the way,

06:11.200 --> 06:13.200
the JIT-generated assembly, right?

06:13.200 --> 06:16.200
And we have a bunch of instructions, blah, blah, blah,

06:16.200 --> 06:19.200
and the sampler lands at a particular instruction,

06:19.200 --> 06:21.200
and then we need to find the debug info.

06:21.200 --> 06:24.200
So, we need to jump over all the instructions

06:24.200 --> 06:27.200
until we hit the debug info, and then we will say: okay,

06:27.200 --> 06:30.200
this stack trace was taken at this particular place.

06:30.200 --> 06:33.200
Here, it's like line number 17.

06:33.200 --> 06:35.200
This is the default.

06:35.200 --> 06:41.200
And I'm pretty sure some people from the JIT teams are familiar with this.

06:41.200 --> 06:44.200
When you look at the code, this is one of the places —

06:44.200 --> 06:48.200
one of the places where the JIT actually places the safepoints.

06:48.200 --> 06:52.200
So, where a loop goes back, there is a safepoint, usually.

06:52.200 --> 06:56.200
Well, the JIT people could talk much more about this; that would be

06:56.200 --> 06:58.200
another talk, at least.

06:58.200 --> 07:01.200
So, we are getting debug info only at safepoints.

07:01.200 --> 07:05.200
So, that means that even though the sampler is not safepoint-biased,

07:05.200 --> 07:09.200
the symbolication will actually bias it back to the safepoints.

07:09.200 --> 07:11.200
So, that's... yeah.

07:11.200 --> 07:13.200
But can we do something about it?

07:13.200 --> 07:14.200
Yeah.

07:14.200 --> 07:15.200
The JVM people

07:15.200 --> 07:19.200
came up with the JVM argument DebugNonSafepoints,

07:19.200 --> 07:20.200
which you can provide.

07:20.200 --> 07:23.200
And when you do that, suddenly, boom,

07:23.200 --> 07:26.200
you have a bunch of debug info, like,

07:26.200 --> 07:28.200
sprinkled around the assembly code.

07:28.200 --> 07:30.200
And this makes it much easier for the profiler

07:30.200 --> 07:32.200
to reason about where the frames landed.

07:32.200 --> 07:37.200
So, yep — just one jump, just one jump away from happiness.

07:37.200 --> 07:39.200
And you have the frame there.

07:39.200 --> 07:41.200
And you see, instead of line 17,

07:41.200 --> 07:44.200
where we just took the loop back-edge — we are at line 16,

07:44.200 --> 07:47.200
which is actually the code that was executing at the time.

07:47.200 --> 07:49.200
And if you are looking for hotspots,

07:49.200 --> 07:53.200
being moved to the safepoint can be quite misleading.

07:53.200 --> 07:55.200
Like, you could be trying to optimize something

07:55.200 --> 07:58.200
which is not really the hot code, right?

07:58.200 --> 08:02.200
So, these are the JVM arguments

08:02.200 --> 08:05.200
which you should provide when you are trying to sample —

08:05.200 --> 08:08.200
when you are trying to profile a Java application to get,

08:08.200 --> 08:10.200
like, the most precise results.
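
The flags discussed here are `-XX:+UnlockDiagnosticVMOptions -XX:+DebugNonSafepoints` (DebugNonSafepoints is a diagnostic option, so it has to be unlocked first). As a companion, a minimal sketch of starting an execution-sample recording programmatically through the `jdk.jfr` API — the workload method is made up for illustration:

```java
// A minimal sketch: start a JFR recording of execution samples from inside the
// application. For precise symbolication, the JVM should also be launched with
//   -XX:+UnlockDiagnosticVMOptions -XX:+DebugNonSafepoints
import jdk.jfr.Recording;

import java.nio.file.Files;
import java.nio.file.Path;
import java.time.Duration;

public class JfrSketch {
    static Path record() {
        try (Recording r = new Recording()) {
            // Ask for the method-sampling event with a 10 ms period.
            r.enable("jdk.ExecutionSample").withPeriod(Duration.ofMillis(10));
            r.start();
            burnCpu();                     // something worth sampling
            r.stop();
            Path out = Files.createTempFile("demo", ".jfr");
            r.dump(out);                   // write the recording to disk
            return out;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    static void burnCpu() {
        long s = 0;
        for (int i = 0; i < 100_000_000; i++) s += i;
        if (s == 42) System.out.println("unlikely");
    }

    public static void main(String[] args) {
        System.out.println("recording: " + record());
    }
}
```

The same kind of recording can also be started from outside the process, for example with `-XX:StartFlightRecording` at launch or `jcmd <pid> JFR.start` at runtime.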

08:10.200 --> 08:13.200
The good thing is — just a small note —

08:13.200 --> 08:16.200
there are profiling tools, like,

08:16.200 --> 08:18.200
async-profiler, which we'll get to,

08:18.200 --> 08:21.200
which are setting these flags by default.

08:21.200 --> 08:23.200
So, when you are using async-profiler —

08:23.200 --> 08:25.200
we will talk about async-profiler a bit later —

08:25.200 --> 08:28.200
it will just set these flags by default,

08:28.200 --> 08:32.200
and you will get really, like, fine-grained data.

08:32.200 --> 08:35.200
To summarize, JDK Flight Recorder:

08:35.200 --> 08:37.200
an amazing tool, no question;

08:37.200 --> 08:40.200
it's everywhere in the JDK from Java 8

08:40.200 --> 08:42.200
up. It's stable, safe —

08:42.200 --> 08:44.200
it's not going to crash your application.

08:44.200 --> 08:46.200
It's fully supported.

08:46.200 --> 08:49.200
Like, OpenJDK, Oracle — it's, yeah,

08:49.200 --> 08:51.200
something very standard.

08:51.200 --> 08:55.200
Small downsides, what we find is that

08:55.200 --> 08:57.200
there is no real CPU sampler,

08:57.200 --> 08:59.200
that's the thing I was talking about.

08:59.200 --> 09:04.200
This approximation might be sometimes not good enough

09:04.200 --> 09:07.200
for the precision you want to get.

09:07.200 --> 09:09.200
It fails silently on errors —

09:09.200 --> 09:10.200
it just jumps over them.

09:10.200 --> 09:13.200
So the statistics can be slightly skewed,

09:13.200 --> 09:16.200
and currently there are no, like,

09:16.200 --> 09:18.200
native frames in the stack traces.

09:18.200 --> 09:19.200
So, we don't have mixed stack traces —

09:19.200 --> 09:21.200
Java plus native.

09:21.200 --> 09:24.200
It's not a big deal,

09:24.200 --> 09:27.200
it's something, yeah, that would be nice,

09:27.200 --> 09:28.200
but.

09:28.200 --> 09:32.200
So, second thing, right?

09:33.200 --> 09:35.200
Next is async-profiler —

09:35.200 --> 09:36.200
let's look at it.

09:36.200 --> 09:38.200
So, this is an amazing tool,

09:38.200 --> 09:40.200
based on something

09:40.200 --> 09:42.200
which is called AsyncGetCallTrace,

09:42.200 --> 09:44.200
which came many, many years ago

09:44.200 --> 09:46.200
from a prehistoric performance tool,

09:46.200 --> 09:48.200
some performance studio,

09:48.200 --> 09:49.200
whatever.

09:49.200 --> 09:52.200
And async-profiler actually made

09:52.200 --> 09:54.200
really great use of it.

09:54.200 --> 09:58.200
And unlike the JFR CPU sampler,

09:58.200 --> 10:00.200
async-profiler is not using,

10:01.200 --> 10:02.200
like, a dedicated thread

10:02.200 --> 10:04.200
going through the threads;

10:04.200 --> 10:06.200
instead, it will use

10:06.200 --> 10:08.200
the kernel's scheduler support

10:08.200 --> 10:10.200
to actually have the kernel

10:10.200 --> 10:12.200
send a signal every time

10:12.200 --> 10:14.200
a thread has consumed

10:14.200 --> 10:16.200
a certain number of milliseconds

10:16.200 --> 10:17.200
of CPU time.

10:17.200 --> 10:19.200
It will send the signal.

10:19.200 --> 10:21.200
The signal will be processed by

10:21.200 --> 10:22.200
the signal handler,

10:22.200 --> 10:23.200
the thread will be interrupted,

10:23.200 --> 10:25.200
and then async-profiler

10:25.200 --> 10:26.200
will walk the stack;

10:26.200 --> 10:28.200
it will combine AsyncGetCallTrace

10:28.200 --> 10:30.200
and native stack walking

10:30.200 --> 10:31.200
to get you, like,

10:31.200 --> 10:33.200
a combined stack trace of the place.

10:33.200 --> 10:35.200
That's easy.

10:35.200 --> 10:37.200
Easier said than done.

10:37.200 --> 10:39.200
But, yes, it's working perfectly.

10:39.200 --> 10:43.200
The number of samples you are getting

10:43.200 --> 10:47.200
is actually proportional to the CPU activity of the threads.

10:47.200 --> 10:50.200
So, you can reason about the CPU activity of the threads

10:50.200 --> 10:51.200
based on

10:51.200 --> 10:53.200
the proportions of the samples collected.
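
The quantity async-profiler budgets against — per-thread CPU time — can be observed from pure Java via `ThreadMXBean`. This is only a sketch of the concept; async-profiler itself uses kernel CPU-time timers and signals, not this API:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class ThreadCpu {
    // Returns the CPU nanoseconds the current thread burned running `work`.
    static long cpuTimeOf(Runnable work) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        long before = mx.getCurrentThreadCpuTime();
        work.run();
        return mx.getCurrentThreadCpuTime() - before;
    }

    public static void main(String[] args) {
        long ns = cpuTimeOf(() -> {
            long s = 0;
            for (int i = 0; i < 50_000_000; i++) s += i;
        });
        System.out.println("consumed ~" + ns / 1_000_000 + " ms of CPU time");
    }
}
```

A CPU-time sampler takes one sample each time this counter advances by a fixed quantum, so sample counts stay proportional to CPU activity regardless of how many idle threads exist.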

10:53.200 --> 10:55.200
Right, it only works on Linux,

10:55.200 --> 10:56.200
but, yeah.

10:56.200 --> 10:57.200
Yeah.

10:57.200 --> 10:59.200
I did include, like, tricks to get

10:59.200 --> 11:00.200
something for Windows,

11:00.200 --> 11:01.200
but I was told it was dark arts,

11:01.200 --> 11:02.200
and I had to —

11:02.200 --> 11:03.200
no.

11:03.200 --> 11:05.200
You shouldn't do that at home, kids,

11:05.200 --> 11:08.200
but yes, it usually only works on Linux.

11:08.200 --> 11:09.200
No.

11:09.200 --> 11:10.200
Yeah.

11:10.200 --> 11:11.200
So, yeah.

11:11.200 --> 11:12.200
This is how you use it —

11:12.200 --> 11:14.200
if you want to go and get your hands dirty

11:14.200 --> 11:15.200
with AsyncGetCallTrace,

11:15.200 --> 11:17.200
this is how you use AsyncGetCallTrace.

11:17.200 --> 11:19.200
You need to resolve the symbol,

11:19.200 --> 11:21.200
then you can use it.

11:21.200 --> 11:23.200
Well, of course, it's unofficial.

11:23.200 --> 11:26.200
So, whatever happens, it's like, yeah, hands off

11:26.200 --> 11:27.200
if you have a problem.

11:27.200 --> 11:31.200
But I must say, like, most of the time it works.

11:31.200 --> 11:33.200
Sometimes it might crash.

11:33.200 --> 11:36.200
Well, nothing is 100% — something always breaks

11:36.200 --> 11:38.200
eventually, doesn't it?

11:38.200 --> 11:41.200
There might be crashes, right?

11:41.200 --> 11:43.200
There you are.

11:43.200 --> 11:44.200
And the funny thing is,

11:44.200 --> 11:45.200
there was —

11:45.200 --> 11:47.200
there is exactly one test case in OpenJDK,

11:47.200 --> 11:49.200
and it has been the only one

11:49.200 --> 11:51.200
for, like, many years,

11:51.200 --> 11:54.200
but it's really unofficial.

11:54.200 --> 11:56.200
Yeah.

11:56.200 --> 11:57.200
So, that's why, like,

11:57.200 --> 11:59.200
async-profiler is trying really hard

11:59.200 --> 12:02.200
to move to a slightly different base.

12:02.200 --> 12:03.200
So, there is this thing

12:03.200 --> 12:05.200
which was introduced — I don't know exactly when,

12:05.200 --> 12:08.200
it's like half a year ago or something.

12:08.200 --> 12:11.200
It's the VMStructs-based stack walker.

12:11.200 --> 12:13.200
It's a handwritten stack walker

12:13.200 --> 12:15.200
based on something called VMStructs.

12:15.200 --> 12:18.200
And VMStructs

12:18.200 --> 12:23.200
basically allows you to crack open JVM internals, if you know how.

12:23.200 --> 12:25.200
Again, it's unofficial, not public.

12:25.200 --> 12:28.200
So, you need to do all this dance with reading symbols,

12:28.200 --> 12:32.200
and you actually need to go and read

12:32.200 --> 12:34.200
vmStructs.cpp.

12:34.200 --> 12:35.200
See, down there?

12:35.200 --> 12:37.200
So, this is just an example.

12:37.200 --> 12:38.200
Like, the file is huge.

12:38.200 --> 12:40.200
You can go to the OpenJDK source code,

12:40.200 --> 12:43.200
you can read vmStructs.cpp,

12:43.200 --> 12:47.200
and if you read through it,

12:47.200 --> 12:51.200
then you get, like, a glimpse of everything that is there.

12:51.200 --> 12:54.200
It gives you access to, like, almost everything.

12:54.200 --> 12:56.200
It was added for the Serviceability Agent.

12:56.200 --> 12:58.200
So, it's pretty powerful.

12:58.200 --> 13:00.200
But, yeah, again, unofficial.

13:00.200 --> 13:01.200
There is no API.

13:01.200 --> 13:04.200
With this, you are on your own.

13:04.200 --> 13:06.200
Um,

13:06.200 --> 13:10.200
Summarizing: async-profiler is a really cool tool.

13:10.200 --> 13:14.200
It's doing the best it can with the

13:14.200 --> 13:16.200
backing technology it has — that means

13:16.200 --> 13:20.200
AsyncGetCallTrace and VMStructs.

13:20.200 --> 13:23.200
It provides mixed stack traces.

13:23.200 --> 13:25.200
And I must say, like, it's fairly stable.

13:25.200 --> 13:28.200
Like, even with the AsyncGetCallTrace issues —

13:28.200 --> 13:32.200
with VMStructs, it's pretty, yeah —

13:32.200 --> 13:34.200
I would say it's not crashing.

13:34.200 --> 13:35.200
Wow. Yes.

13:35.200 --> 13:36.200
I think it's not crashing.

13:36.200 --> 13:38.200
And almost everybody

13:38.200 --> 13:41.200
uses async-profiler, and it keeps getting new functionality.

13:41.200 --> 13:45.200
And it's open source, which is fine.

13:45.200 --> 13:49.200
And it's the basis of many external tools —

13:49.200 --> 13:52.200
many external profiling tools; your application

13:52.200 --> 13:55.200
performance monitoring tool probably uses async-profiler,

13:55.200 --> 13:57.200
or a fork of it.

13:57.200 --> 13:58.200
Yeah.

13:58.200 --> 14:00.200
It also relies on JVM internals, which is —

14:00.200 --> 14:01.200
Yeah.

14:01.200 --> 14:03.200
That's the thing, like —

14:03.200 --> 14:06.200
since all those things are unofficial,

14:06.200 --> 14:10.200
um, in fact, there are no guarantees, right?

14:10.200 --> 14:13.200
Because, like, basically, you are relying on internal

14:13.200 --> 14:15.200
implementation details.

14:15.200 --> 14:18.200
So, there cannot be any guarantees.

14:18.200 --> 14:21.200
So, that might be improved, I hope.

14:21.200 --> 14:23.200
Somehow.

14:23.200 --> 14:24.200
Oh.

14:24.200 --> 14:25.200
And yeah.

14:25.200 --> 14:28.200
So, these are the two tools

14:28.200 --> 14:31.200
which I've been using almost daily for the

14:31.200 --> 14:33.200
last, I don't know how many, years.

14:33.200 --> 14:37.200
Uh, and then there's this eBPF thing.

14:37.200 --> 14:39.200
I think you did something with it, right?

14:39.200 --> 14:41.200
So, if you see me, like, running out,

14:41.200 --> 14:44.200
I'm probably off doing eBPF things with data.

14:44.200 --> 14:48.200
But anyway, so, um, there's also the OpenTelemetry

14:48.200 --> 14:49.200
eBPF profiler.

14:49.200 --> 14:51.200
It's essentially a runtime-agnostic

14:51.200 --> 14:53.200
sampler, written in eBPF.

14:53.200 --> 14:55.200
And it's, like,

14:55.200 --> 14:56.200
under the CNCF.

14:56.200 --> 14:58.200
So, it's also a stable open source project.

14:58.200 --> 15:00.200
It supports many languages —

15:00.200 --> 15:03.200
it's not designed just for Java, but it also supports Java, with native frames.

15:03.200 --> 15:05.200
And how it works — what is eBPF?

15:05.200 --> 15:08.200
eBPF is, uh, a technology that makes the Linux kernel

15:08.200 --> 15:11.200
programmable, at native execution speed.

15:11.200 --> 15:13.200
So, essentially, you can extend the kernel

15:13.200 --> 15:15.200
and attach to hooks.

15:15.200 --> 15:16.200
How does this work?

15:16.200 --> 15:18.200
Just half a minute on this.

15:18.200 --> 15:20.200
Um, you compile your program —

15:20.200 --> 15:22.200
you can write it, for example, in C —

15:22.200 --> 15:23.200
compiled down to a bytecode.

15:23.200 --> 15:24.200
It's a fairly simple bytecode.

15:24.200 --> 15:26.200
Nothing too complex and intricate —

15:26.200 --> 15:28.200
I'll show you one; it's a pretty simple bytecode.

15:28.200 --> 15:30.200
It's a pretty simple register machine.

15:30.200 --> 15:32.200
It has a handful of registers and so on, um,

15:32.200 --> 15:35.200
and no unbounded loops or recursion.

15:36.200 --> 15:39.200
Well, nowadays it has bounded loops, but no recursion in here.

15:39.200 --> 15:42.200
Um, then what you do is load it into the Linux kernel,

15:42.200 --> 15:45.200
because this program runs in the Linux kernel.

15:45.200 --> 15:47.200
And we have a verifier — the verifier:

15:47.200 --> 15:49.200
it verifies the bytecode

15:49.200 --> 15:50.200
when it's loaded, like, as an object,

15:50.200 --> 15:52.200
which basically prevents you from having

15:52.200 --> 15:54.200
segfaults or infinite loops in the Linux kernel,

15:54.200 --> 15:55.200
which is pretty bad —

15:55.200 --> 15:57.200
typically, because it would crash your system.

15:57.200 --> 15:59.200
And then there is the JIT compiler —

15:59.200 --> 16:02.200
uh, the JVM people know that JIT compilers are really fun.

16:02.200 --> 16:04.200
It's on all the major platforms, like,

16:04.200 --> 16:08.200
you know, s390, um, and x86-64, and so on.

16:08.200 --> 16:10.200
Um, and then your eBPF program

16:10.200 --> 16:12.200
attaches to some calls, tracepoints, and more.

16:12.200 --> 16:14.200
And you can essentially do the same thing

16:14.200 --> 16:16.200
as you do with async-profiler.

16:16.200 --> 16:19.200
You can also hook timers and

16:19.200 --> 16:21.200
other events. But the cool thing is,

16:21.200 --> 16:22.200
it runs in the Linux kernel.

16:22.200 --> 16:24.200
So you have system-wide visibility.

16:24.200 --> 16:27.200
Um, and also, it's far faster,

16:27.200 --> 16:30.200
because you don't have to jump between

16:30.200 --> 16:33.200
the profiler and the profiled process all the time via signals.

16:33.200 --> 16:36.200
And in the end, it also uses VMStructs,

16:36.200 --> 16:39.200
so it's essentially async-profiler running in the Linux kernel,

16:39.200 --> 16:41.200
which is kind of cool.

16:41.200 --> 16:43.200
So it has a few advantages —

16:43.200 --> 16:47.200
essentially the same as async-profiler, just a little bit more stable.

16:47.200 --> 16:49.200
It's backed by the CNCF,

16:49.200 --> 16:51.200
which is a lot larger, um,

16:51.200 --> 16:54.200
and there's a standardized protocol, which is really cool,

16:54.200 --> 16:56.200
but it still relies on JVM internals, and also

16:56.200 --> 16:59.200
we have the same problems as with async-profiler.

16:59.200 --> 17:02.200
And I talked with people that are working on

17:02.200 --> 17:05.200
um, the OTel, the OTel Java profiling.

17:05.200 --> 17:06.200
And, well, like, yeah,

17:06.200 --> 17:10.200
it's interesting that we rely so much

17:10.200 --> 17:11.200
on internals for Java,

17:11.200 --> 17:13.200
and they're slightly worried.

17:13.200 --> 17:16.200
But anyway, um, so, um, yeah.

17:16.200 --> 17:17.200
Yeah.

17:17.200 --> 17:19.200
I think we just went through, like,

17:19.200 --> 17:24.200
the most used profiling tools nowadays on the JVM.

17:24.200 --> 17:26.200
And I would like to talk —

17:26.200 --> 17:29.200
or we would like to talk — a bit more about how we would

17:29.200 --> 17:31.200
propose advancing this.

17:31.200 --> 17:33.200
So, I saw, like, you were working on some

17:33.200 --> 17:36.200
JEP thing, right?

17:36.200 --> 17:37.200
Can you tell us more?

17:37.200 --> 17:38.200
Yeah, yeah.

17:38.200 --> 17:39.200
It's quite interesting.

17:39.200 --> 17:42.200
So, um, first, we focus on the JVM,

17:42.200 --> 17:45.200
not on things like AsyncGetCallTrace and async-profiler,

17:45.200 --> 17:47.200
just because we want to have something

17:47.200 --> 17:51.200
in OpenJDK that's tested along with the whole

17:51.200 --> 17:54.200
JDK — tested so that the people that work on OpenJDK

17:54.200 --> 17:58.200
actually support it, because when they're changing the

17:58.200 --> 18:01.200
JDK's internals, they don't crash our tool,

18:01.200 --> 18:04.200
because it's in the JVM, it's in the JDK.

18:04.200 --> 18:06.200
So, the tests run all the time, um,

18:06.200 --> 18:09.200
and that's really good — we have a well-supported solution.

18:09.200 --> 18:13.200
Um, and one of the things that we're currently working on —

18:13.200 --> 18:16.200
um, fingers crossed —

18:16.200 --> 18:19.200
comes into JDK 25 as a proper CPU-time sampler.

18:19.200 --> 18:22.200
That's what I've been working on, like, for the last few years.

18:22.200 --> 18:25.200
We're also working on profiling labels, which you can tell us more about in a minute.

18:25.200 --> 18:28.200
We want to have mixed stack traces, hopefully, in the future,

18:28.200 --> 18:31.200
like with native frames, especially with Project Panama

18:31.200 --> 18:35.200
coming in, where we have, um, much more —

18:35.200 --> 18:37.200
where we have a much easier ability to call

18:37.200 --> 18:38.200
C functions all the time.

18:38.200 --> 18:41.200
And, personally, we're thinking about this profiling work

18:41.200 --> 18:43.200
as still young — it's still young,

18:43.200 --> 18:45.200
but it's growing.

18:45.200 --> 18:48.200
So, uh, I'll start with the CPU part.

18:48.200 --> 18:49.200
Yeah, exactly.

18:49.200 --> 18:51.200
Um, we're doing it essentially the same way

18:51.200 --> 18:52.200
as async-profiler does.

18:52.200 --> 18:54.200
We use the same kind of timer functionality.

18:54.200 --> 18:59.200
For CPU, we use CPU timers,

18:59.200 --> 19:02.200
um, on Linux only. Um, this gives, um —

19:02.200 --> 19:04.200
um, and how this works, essentially: every

19:04.200 --> 19:06.200
few milliseconds of runtime

19:06.200 --> 19:08.200
that a thread has consumed

19:08.200 --> 19:11.200
of CPU, um, we jump into a signal handler,

19:11.200 --> 19:14.200
triggered by the kernel, and we walk the stack trace,

19:14.200 --> 19:17.200
but we only get, like, the raw addresses of functions.

19:17.200 --> 19:20.200
We have to do the symbolication

19:20.200 --> 19:22.200
later, in JFR,

19:22.200 --> 19:24.200
but this hopefully changes in the future —

19:24.200 --> 19:27.200
I think there are pretty cool people working on this,

19:27.200 --> 19:29.200
um, let's see how it goes.
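
JEP 509 (JFR CPU-Time Profiling, JDK 25) exposes this sampler as a new JFR event. A hedged sketch of requesting it through the `jdk.jfr` API — the event name and `throttle` setting follow the JEP, and on an older JDK the unknown event name is simply ignored, so this is a no-op there:

```java
import jdk.jfr.Recording;

public class CpuTimeSampling {
    // Sketch: request the CPU-time sampler added by JEP 509 (JDK 25).
    // On a pre-25 JDK the event name is unknown and the setting is a no-op.
    static Recording startCpuTimeRecording() {
        Recording r = new Recording();
        r.enable("jdk.CPUTimeSample").with("throttle", "10ms");
        r.start();
        return r;
    }

    public static void main(String[] args) {
        try (Recording r = startCpuTimeRecording()) {
            System.out.println("state: " + r.getState());
        }
    }
}
```

Unlike `jdk.ExecutionSample`'s fixed wall-clock period, the `throttle` setting budgets samples against consumed CPU time, which is exactly the property discussed above.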

19:29.200 --> 19:31.200
The cool thing is that this way

19:31.200 --> 19:34.200
we're, um, losing two of the main pain points that we see

19:34.200 --> 19:36.200
in, uh, Java currently:

19:36.200 --> 19:38.200
that it fails silently —

19:38.200 --> 19:40.200
because now we record when something fails,

19:40.200 --> 19:42.200
so it's good to know, um —

19:42.200 --> 19:45.200
and that it had no real CPU-time sampling.

19:46.200 --> 19:49.200
Um, yeah — profiling labels, very fast.

19:49.200 --> 19:51.200
So, this is not a new concept.

19:51.200 --> 19:53.200
Like, um, Marcus here —

19:53.200 --> 19:56.200
we got somewhere into this in an email conversation —

19:56.200 --> 19:58.200
they started, like, more than 15 years ago,

19:58.200 --> 19:59.200
with, like, thread coloring.

19:59.200 --> 20:01.200
But what is it about?

20:01.200 --> 20:04.200
We want to add more dimensions to profiling data

20:04.200 --> 20:06.200
than just, like, CPU time spent

20:06.200 --> 20:08.200
or the amount of memory allocated, right?

20:08.200 --> 20:10.200
So you can think about it like this:

20:10.200 --> 20:12.200
you could attach something like,

20:12.200 --> 20:14.200
say, REST API calls or

20:14.200 --> 20:17.200
generic operations to your profiling data,

20:17.200 --> 20:19.200
or a distributed tracing ID

20:19.200 --> 20:20.200
if you are using distributed tracers, or a

20:20.200 --> 20:21.200
custom ID.

20:21.200 --> 20:24.200
That will then allow you to slice and dice this data

20:24.200 --> 20:27.200
in a kind of analytical fashion.

20:27.200 --> 20:30.200
So here we have an example:

20:30.200 --> 20:32.200
we have this bunch of samples here,

20:32.200 --> 20:34.200
which are taken from the thread T1,

20:34.200 --> 20:37.200
uh, and this stack, yeah.

20:37.200 --> 20:40.200
So this is basically something called

20:40.200 --> 20:41.200
a flame graph.

20:41.200 --> 20:43.200
It's a well-known visualization —

20:43.200 --> 20:45.200
it is essentially a visualization

20:45.200 --> 20:48.200
of what a process does

20:48.200 --> 20:50.200
during runtime. So, essentially,

20:50.200 --> 20:52.200
we throw all the time information away —

20:52.200 --> 20:54.200
um, like, the specific timing information —

20:54.200 --> 20:59.200
and just plot the proportion of the runtime of each method

20:59.200 --> 21:02.200
in the stack, relative to the other methods.

21:02.200 --> 21:05.200
So, for example, we see here the top methods on the —

21:05.200 --> 21:07.200
above, like the top calling methods;

21:07.200 --> 21:09.200
they call down, and the deeper we go,

21:09.200 --> 21:11.200
the deeper we go into the stack.

21:11.200 --> 21:14.200
It's a typical visualization you see all the time.
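
Under the hood, a flame graph is built by folding sampled stacks into per-path counts. A minimal sketch of that aggregation step — the stacks below are made up for illustration:

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class Folded {
    // Collapse sampled stacks (root-to-leaf) into the "folded" format
    // that flame graph tools consume: "frame1;frame2;frame3 count".
    static Map<String, Integer> fold(List<List<String>> samples) {
        Map<String, Integer> counts = new TreeMap<>();
        for (List<String> stack : samples) {
            counts.merge(String.join(";", stack), 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        List<List<String>> samples = List.of(
                List.of("main", "handleRequest", "parseJson"),
                List.of("main", "handleRequest", "parseJson"),
                List.of("main", "handleRequest", "queryDb"));
        fold(samples).forEach((path, n) -> System.out.println(path + " " + n));
    }
}
```

The width of each box in the rendered graph is then just the count of the folded path, relative to the total number of samples.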

21:14.200 --> 21:15.200
Yeah, yeah.

21:15.200 --> 21:17.200
So, now we go back to the flame graph,

21:17.200 --> 21:19.200
and you can see that we have operations.

21:19.200 --> 21:22.200
We have REST and UI operations, and on the left side

21:22.200 --> 21:24.200
we have colors, and we can slap the same colors

21:24.200 --> 21:26.200
on top of the samples.

21:26.200 --> 21:28.200
And we don't have to —

21:28.200 --> 21:30.200
we don't have to stop at one dimension;

21:30.200 --> 21:32.200
we can add more dimensions, whatever makes sense for us.

21:32.200 --> 21:34.200
So, we can add, like, a customer dimension,

21:34.200 --> 21:37.200
and then we can recolor the samples

21:37.200 --> 21:40.200
which are related to this particular big customer

21:40.200 --> 21:42.200
to a nice orange.

21:42.200 --> 21:46.200
And this is how the flame graph would look in real life.

21:46.200 --> 21:48.200
Like, it's not going to be in a nicely chronological order;

21:48.200 --> 21:50.200
it will be kind of mixed up,

21:50.200 --> 21:52.200
and you can do, like, okay:

21:52.200 --> 21:56.200
I want to see only the REST-related samples.

21:56.200 --> 21:57.200
Boom, you have it.

21:57.200 --> 21:59.200
Like, you are focusing just on REST.

21:59.200 --> 22:03.200
And then, when you add the second dimension for the customer,

22:03.200 --> 22:06.200
you can, like, laser-focus just on the data

22:06.200 --> 22:07.200
which might be critical for you.
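
The slicing described above can be sketched in a few lines: attach label dimensions to each sample, then filter on them. The label names and stacks below are made up for illustration:

```java
import java.util.List;

public class LabeledSamples {
    // Hypothetical labeled sample: the profiler attached an "operation" and a
    // "customer" dimension to each stack at collection time.
    record Sample(String operation, String customer, List<String> stack) {}

    // Keep only samples matching both label dimensions.
    static List<Sample> slice(List<Sample> all, String operation, String customer) {
        return all.stream()
                .filter(s -> s.operation().equals(operation))
                .filter(s -> s.customer().equals(customer))
                .toList();
    }

    public static void main(String[] args) {
        List<Sample> samples = List.of(
                new Sample("REST", "bigCustomer", List.of("main", "handle", "parse")),
                new Sample("REST", "other", List.of("main", "handle", "query")),
                new Sample("UI", "bigCustomer", List.of("main", "render")));
        System.out.println(slice(samples, "REST", "bigCustomer").size() + " matching sample(s)");
    }
}
```

Feeding only the filtered subset into the flame graph builder is exactly the "laser focus" step: everything unrelated to that operation and customer disappears from the picture.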

22:07.200 --> 22:10.200
So, you can remove all the big-data noise —

22:10.200 --> 22:14.200
with the current profiling tools

22:14.200 --> 22:17.200
we are running on, like, huge services in huge fleets.

22:17.200 --> 22:21.200
And at Datadog, this is proving invaluable.

22:21.200 --> 22:25.200
Like, we are getting really good responses from our users.

22:25.200 --> 22:28.200
This is not just something I'm making up —

22:28.200 --> 22:31.200
users really, really want to have labels.

22:31.200 --> 22:33.200
OTel is also looking at labels.

22:33.200 --> 22:36.200
So, this is something which is getting traction.

22:36.200 --> 22:40.200
And we've been talking with Erik Gahlin here for a while,

22:40.200 --> 22:44.200
thinking about ways this might get incorporated into JFR.

22:44.200 --> 22:46.200
So, keep on watching this space.

22:46.200 --> 22:48.200
That might get interesting.

22:48.200 --> 22:50.200
And what about the OTel profiler?

22:50.200 --> 22:52.200
It's in a good place.

22:52.200 --> 22:54.200
We'll see how it goes.

22:54.200 --> 22:56.200
It might, in the future,

22:56.200 --> 22:59.200
maybe integrate JDK JFR data.

23:00.200 --> 23:02.200
Maybe not — let's see.

23:02.200 --> 23:03.200
Yep.

23:03.200 --> 23:06.200
So, thank you for sitting here

23:06.200 --> 23:07.200
and listening to this.

23:07.200 --> 23:08.200
We hope you liked it;

23:08.200 --> 23:11.200
if not, be honest and tell us.

23:11.200 --> 23:13.200
And we'll also be around right here.

23:13.200 --> 23:15.200
You can find us over here.

23:15.200 --> 23:18.200
There are resources linked there, with all the links I —

23:18.200 --> 23:20.200
or we — used here.

23:20.200 --> 23:21.200
Thank you.

23:21.200 --> 23:22.200
Thank you again.

23:22.200 --> 23:23.200
Thank you.

