WEBVTT

00:00.000 --> 00:11.480
Hi everyone, my name is Alexander Krizhanovsky, and I'll be speaking on behalf of my colleague, Evgeny

00:11.480 --> 00:19.280
Michanik, who did most of the work that I will talk about.

00:19.280 --> 00:31.280
Also, today Robin Marx talked about prioritization on the server side; I come exactly from the web server

00:31.280 --> 00:37.760
development side, in particular HTTP proxies, so I'll also be talking about how prioritization happens

00:37.760 --> 00:43.360
from the HTTP proxy side, in particular in HTTP/2 streams.

00:44.360 --> 00:56.360
To talk about HTTP/2 streams, I would love to use WebPageTest resources for our website,

00:56.360 --> 01:05.360
and I already made a prepared report for the website, with the LCP; I will talk more about

01:05.360 --> 01:24.360
HTTP/2 in terms of HTTP streams, and if we click on... well, yeah, let me check the connection.

01:35.360 --> 01:42.360
Thank you very much.

02:05.360 --> 02:11.360
Okay, so we finally have internet.

02:11.360 --> 02:14.360
Let's try again.

02:14.360 --> 02:16.360
Yeah, yeah, here we go.

02:16.360 --> 02:29.360
So, the LCP is good, and on this graph we have the LCP marked with a green line, at about three seconds.

02:29.360 --> 02:42.360
The thing is that if we check the protocol for them, it's HTTP/2, and with this protocol we have a single

02:42.360 --> 02:49.360
TCP connection, with a single TLS connection inside, and we have a bunch of HTTP/2 streams, each delivering

02:49.360 --> 02:57.360
a separate web resource, and there are dependencies between the streams, and priorities.

02:57.360 --> 03:04.360
So, to check the dependencies, let's click on the dependency graph and have a go.

03:04.360 --> 03:16.360
So, there is a column called dependent stream, and in particular there is stream 0, which is the root node,

03:16.360 --> 03:19.360
and there are lists of HTTP/2 streams.

03:19.360 --> 03:22.360
So, those three streams are dependent on each other.

03:22.360 --> 03:34.360
Other streams, like 11 and 15, are siblings in the dependency tree, and that means that they share bandwidth.

03:34.360 --> 03:45.360
Let's go back to the start: we also have priority; it's a weight for the stream.

03:45.360 --> 03:53.360
If we have a look at these values, this one has weight 47, and another has 220.

03:53.360 --> 03:56.360
So far so good.

03:56.360 --> 04:05.360
We have had HTTP/2 stream scheduling for a while, and in late 2023 we started to optimize our stream scheduler

04:05.360 --> 04:08.360
to get more performance for web clients.

04:08.360 --> 04:15.360
At that time the topic was well studied; there were good papers from Robin Marx about HTTP/2 streams,

04:15.360 --> 04:20.360
prioritization, scheduling, and concerns, so this is a very well studied topic.

04:20.360 --> 04:27.360
And the more important thing is that the first RFC is 7540, which is considered outdated

04:27.360 --> 04:31.360
and replaced by RFC 9218.

04:32.360 --> 04:45.360
However, by late 2023 it was more than one year, in particular a year and a half, since RFC 9218 was published.

04:45.360 --> 04:52.360
So, it appeared to us that nobody used the new standard for HTTP/2.

04:52.360 --> 05:02.360
Instead, everyone used RFC 7540, the old standard, for HTTP/2, and the newer RFC is used only for HTTP/3.

05:02.360 --> 05:08.360
We had a lot of discussions internally about whether we needed to implement the old standard

05:08.360 --> 05:15.360
when there is a new one which replaced it; but all implementations used the old standard.

05:15.360 --> 05:24.360
So we said, okay, apparently nobody cares about the new RFC for HTTP/2, so we had to implement RFC 7540.

05:24.360 --> 05:31.360
We'll come back to this topic later, but for now let's consider the differences between the standards.

05:31.360 --> 05:37.360
The first and main difference, in the context of this talk, is the dependency tree.

05:37.360 --> 05:48.360
So, RFC 7540 is about a dependency tree. We saw it on WebPageTest for our domain; there's another example of a much deeper dependency tree.

05:48.360 --> 06:06.360
And there are also some differences between weights and urgency. In RFC 7540 we have weights, while in RFC 9218 we have urgency. They are a bit different, but for now we can ignore the differences.

06:06.360 --> 06:13.360
As for reprioritization, we'll have a look at it on the next slide; there are two slides, which we added

06:13.360 --> 06:21.360
later. But there is an interesting point about out-of-order responses.

06:21.360 --> 06:32.360
In particular, if we go back to our example, here we see that a bunch of PNG resources are scheduled more or less in parallel.

06:32.360 --> 06:38.360
But an intermediate reverse proxy, for example your CDN, can receive the requests simultaneously,

06:38.360 --> 06:44.360
and it can forward the requests to different upstream servers, depending on the installation.

06:44.360 --> 06:48.360
And different upstream servers can respond in different times.

06:48.360 --> 06:55.360
So, technically, you can receive a response for a less prioritized request earlier.

06:55.360 --> 07:09.360
And at this point, the intermediate reverse proxy must consider what to do: whether to be RFC-compliant and postpone sending the lower-priority resource until the higher-priority resource is received,

07:09.360 --> 07:13.360
or just send everything as it is received.

07:13.360 --> 07:21.360
Surely, most implementations send as early as possible, to not introduce artificial delays.

07:21.360 --> 07:27.360
It's not quite RFC-compliant, but it improves performance.

07:27.360 --> 07:40.360
Next, about reprioritization: it exists in both standards; it's about weight changes or urgency changes, that is, priority changes.

07:40.360 --> 07:44.360
And the next one is about dependency tree reconstruction.

07:44.360 --> 07:51.360
In this figure, we see that stream A becomes dependent on its grandchild G.

07:51.360 --> 07:56.360
So, we need to rebuild the dependency tree and place G on top of A.

07:56.360 --> 07:58.360
And we have two cases.

07:58.360 --> 08:02.360
The first one is that stream A is non-exclusive.

08:02.360 --> 08:07.360
It means that it can share bandwidth with other streams, as we saw for the PNG resources.

08:07.360 --> 08:16.360
In this case, we just move G into A's place and make A a child of G.

08:16.360 --> 08:23.360
However, if stream A is exclusive, it means that it cannot share bandwidth.

08:23.360 --> 08:26.360
It doesn't allow any siblings in the dependency tree.

08:26.360 --> 08:33.360
And in this case, the sibling F becomes a child of A.

08:33.360 --> 08:39.360
So, even in this simple example, we see that the tree operations are quite complex.

08:39.360 --> 08:45.360
So, if you try to program this, you get non-trivial tree operations, right?
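
These operations can be sketched in a few lines. The following is a minimal Python sketch of the RFC 7540 reprioritization rules (section 5.3.3), not Tempesta FW's actual kernel code; all names here are mine, for illustration only.

```python
class Stream:
    def __init__(self, sid, weight=16):
        self.sid = sid
        self.weight = weight
        self.parent = None
        self.children = []

def is_descendant(node, root):
    """True if `node` is in the subtree rooted at `root`."""
    while node is not None:
        if node is root:
            return True
        node = node.parent
    return False

def detach(s):
    if s.parent is not None:
        s.parent.children.remove(s)
    s.parent = None

def attach(child, parent):
    child.parent = parent
    parent.children.append(child)

def reprioritize(s, new_parent, exclusive):
    # RFC 7540 rule: if the new parent is a descendant of `s`, it is
    # first moved (with its own subtree) under s's previous parent.
    if is_descendant(new_parent, s):
        old_parent = s.parent
        detach(new_parent)
        attach(new_parent, old_parent)
    detach(s)
    if exclusive:
        # s becomes the sole child: it adopts the new parent's children.
        for c in list(new_parent.children):
            detach(c)
            attach(c, s)
    attach(s, new_parent)
```

With the tree from the slide (root, A with children, grandchild G with child F), `reprioritize(A, G, exclusive=True)` lifts G to the root and moves F under A, matching the exclusive case described above.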

08:45.360 --> 08:52.360
Let's have a look at how Firefox generates HTTP/2 streams.

08:52.360 --> 09:01.360
The first interesting thing is that for Firefox, as you see, there are groups of paired frames on the slide.

09:01.360 --> 09:06.360
The first line creates a HEADERS stream without a weight,

09:06.360 --> 09:09.360
without a dependency, without anything; it's just a HEADERS stream.

09:09.360 --> 09:18.360
And the next frame is sent to point out the exact properties for the stream: in particular, weight,

09:18.360 --> 09:21.360
dependency, and the exclusive flag.

09:21.360 --> 09:26.360
On this slide is a trace we received on the Tempesta FW side,

09:26.360 --> 09:31.360
when Firefox was used to browse some resources.

09:31.360 --> 09:38.360
And in this case, we see what Firefox really sends to the web server.

09:38.360 --> 09:52.360
However, if we go back to WebPageTest and consider, for example, this resource: it's marked as exclusive.

09:52.360 --> 09:59.360
But regardless of which resource we observe, it will have the exclusive flag set.

09:59.360 --> 10:06.360
I have no idea which Firefox version WebPageTest used, or which engine build, but this is what we have.

10:06.360 --> 10:16.360
The next thing is that both streams, 15 and 11, actually share bandwidth,

10:16.360 --> 10:23.360
yet they are marked as exclusive. So let's find the streams: this is 11, it's exclusive,

10:23.360 --> 10:31.360
and here is 15, which is also exclusive.

10:31.360 --> 10:39.360
So apparently, WebPageTest builds the dependency graph in its own fashion and doesn't point out the weight.

10:39.360 --> 10:45.360
I believe it doesn't invent anything; I mean, Firefox sent this, so everything should be good.

10:45.360 --> 10:50.360
But, unfortunately, the dependency tree view has some internal bugs.

10:50.360 --> 10:54.360
It shows that the streams cannot share bandwidth,

10:54.360 --> 10:58.360
that they actually depend on each other.

10:58.360 --> 11:12.360
The next thing is that if we build the same dependency graph for the Chrome browser, we clearly see that in Chrome there is just a linear dependency chain.

11:12.360 --> 11:16.360
There's no bandwidth sharing between streams at all.

11:16.360 --> 11:20.360
The same goes for other browsers, WebKit and so on.

11:20.360 --> 11:30.360
And even for Firefox, it's not clear what Firefox actually expects, but in practice we have a linear dependency chain.

11:30.360 --> 11:39.360
So technically, it means that even with RFC 7540 we don't have any bandwidth sharing, or at least we should not have.

11:39.360 --> 11:44.360
And there are actually not so many cases when we do need bandwidth sharing.

11:44.360 --> 11:47.360
One of the cases could be progressive JPEGs.

11:47.360 --> 11:50.360
Another case is the Firefox case.

11:50.360 --> 11:53.360
It's unclear what Firefox does and what to expect.

11:53.360 --> 12:04.360
And the next one is the nghttp2 library, and in particular the h2load benchmark, which remains the main HTTP/2 benchmark.

12:04.360 --> 12:10.360
And it does use bandwidth sharing in the streams it creates.

12:10.360 --> 12:16.360
So technically, there is rich free space for web APIs which might benefit from bandwidth sharing.

12:16.360 --> 12:22.360
Probably everyone can imagine a situation where you can benefit from bandwidth sharing.

12:22.360 --> 12:27.360
So we decided to still implement bandwidth sharing, in a proper way.

12:28.360 --> 12:32.360
A couple of words about progressive JPEGs: we did some research.

12:32.360 --> 12:38.360
If anyone has feedback about this, I would love to hear it here at the web performance

12:38.360 --> 12:44.360
devroom; but from our research, it seems that nobody uses progressive JPEGs nowadays.

12:44.360 --> 12:53.360
And even websites which are expected to use a lot of images that must be shown very quickly do not use progressive JPEGs.

12:53.360 --> 13:08.360
Instead, typically WebP or some other optimization techniques are used to get good performance for web pages.

13:08.360 --> 13:14.360
Let's get back to bandwidth sharing for Firefox again.

13:14.360 --> 13:18.360
So, as we saw, Firefox actually doesn't use the exclusive flag.

13:18.360 --> 13:22.360
At least in the traces in which we explored several versions of Firefox,

13:22.360 --> 13:27.360
there were no exclusive flags set by Firefox.

13:27.360 --> 13:39.360
And from WebPageTest we actually see that, well, in this WebPageTest report, most of the transfers,

13:39.360 --> 13:47.360
like the PNG requests, were initiated simultaneously, but the transfers happen more or less at different times.

13:47.360 --> 13:57.360
Maybe with a few exceptions only; there was an overlap, for example, for stream 5 and another stream.

13:57.360 --> 14:06.360
So we see that there are some cases like this that look like parallel bandwidth transfers.

14:06.360 --> 14:16.360
But this trace was also captured for Firefox, and here we see that there is clear bandwidth sharing for Firefox.

14:16.360 --> 14:21.360
And basically, this is a very good technique to achieve bad performance.

14:21.360 --> 14:36.360
So the first image, the technologies logo PNG, this green line at 91 milliseconds: this image is apparently crucial for the LCP.

14:36.360 --> 14:43.360
And this image shares bandwidth with other images, while it should not share bandwidth, to improve our LCP.

14:43.360 --> 14:51.360
So it's more beneficial to deliver the crucial image first, and only then schedule the other transfers.

14:51.360 --> 14:54.360
Not only that, but the same actually applies to the second image.

14:54.360 --> 15:00.360
It's also not wise for it to share bandwidth with the other images, because, as we see, it's a PNG.

15:00.360 --> 15:09.360
The images are not progressive JPEGs, so the user doesn't see any progress until the images finish downloading.

15:09.360 --> 15:13.360
For a long time, the user will see just a blank page without anything.

15:13.360 --> 15:20.360
So basically, bandwidth sharing is not good for a typical website.

15:20.360 --> 15:31.360
Another interesting thing about Firefox, a unique thing, is that Firefox builds a lot of idle streams, which are just placeholders.

15:31.360 --> 15:40.360
So in this picture, streams 13, 3, 5, 7, all the top-level streams, are just placeholders.

15:40.360 --> 15:46.360
They never transfer any data; they are used just as placeholders in the dependency tree.

15:46.360 --> 15:54.360
No other browser has this feature; it's unique to Firefox, and it's just an implementation detail.

15:54.360 --> 16:00.360
And for sure, Firefox sends a lot of reprioritization frames.

16:01.360 --> 16:04.360
Let's summarize the differences between the two standards.

16:04.360 --> 16:09.360
So, RFC 7540 is outdated, replaced by RFC 9218.

16:09.360 --> 16:13.360
Yet for a while it appeared that nobody used the latter for HTTP/2.

16:13.360 --> 16:21.360
RFC 9218 is much more attractive for developers, because it's simple.

16:21.360 --> 16:26.360
There are no sophisticated tree operations for a dependency tree.

16:26.360 --> 16:29.360
It's much simpler, and it's newer.

16:29.360 --> 16:34.360
So, technically, we should have chosen RFC 9218.

16:34.360 --> 16:40.360
There were a lot of available open-source implementations.

16:40.360 --> 16:45.360
So, we started to investigate several of them, the most suitable ones.

16:45.360 --> 16:47.360
And the first was H2O.

16:47.360 --> 16:55.360
It uses some technical tricks, and thanks to these tricks it gets better CPU utilization.

16:55.360 --> 17:02.360
But they increase memory usage: every created stream

17:02.360 --> 17:11.360
has more than one kilobyte of data just to schedule streams and find which stream will be scheduled next.

17:11.360 --> 17:16.360
Another thing is that it uses only 64 different groups,

17:16.360 --> 17:24.360
while we have 256 actual stream priorities, so there are some inaccuracies in the algorithm.

17:24.360 --> 17:30.360
We made a simple Python script which compares the algorithm with ideal

17:30.360 --> 17:35.360
weighted fair queueing, and found that sometimes it's not theoretically exact.

17:35.360 --> 17:40.360
Again, this is theoretical inaccuracy, not necessarily visible in real workloads.
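
Our comparison script isn't shown here, but the idea is easy to reconstruct in Python. The bucketing scheme below is my own guess for illustration, not the actual grouping code: quantizing 256 possible weights into 64 groups can make streams with distinct weights receive identical bandwidth shares.

```python
def ideal_share(weights):
    """Ideal WFQ: long-run bandwidth share is proportional to weight."""
    total = sum(weights.values())
    return {s: w / total for s, w in weights.items()}

def quantized_share(weights, buckets=64, max_weight=256):
    """Share when 256 possible weights are first collapsed into 64 groups.

    The grouping formula here is an assumption for illustration only.
    """
    step = max_weight // buckets  # four adjacent weights land in one group
    q = {s: (w // step + 1) * step for s, w in weights.items()}
    total = sum(q.values())
    return {s: w / total for s, w in q.items()}

# Weights 5 and 6 deserve slightly different shares, but both fall into
# the same group of four, so the quantized scheduler treats them equally.
print(ideal_share({"a": 5, "b": 6}))
print(quantized_share({"a": 5, "b": 6}))
```

The ideal shares differ (5/11 versus 6/11), while the quantized ones come out equal; that is the kind of theoretical inaccuracy the script looked for.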

17:40.360 --> 17:43.360
So, we also moved on to other implementations.

17:43.360 --> 17:50.360
And nghttp2 truly uses a weighted fair queue, which is true fair queueing.

17:50.360 --> 18:05.360
And basically, its performance in terms of the quality of the scheduling is the best, because it's a natural weighted fair queueing algorithm.

18:05.360 --> 18:07.360
But we didn't like the implementation.

18:07.360 --> 18:10.360
It causes more data copies.

18:10.360 --> 18:16.360
It keeps the scheduling containers separate from the streams.

18:16.360 --> 18:18.360
So you have a stream,

18:18.360 --> 18:20.360
and you have a separate heap data structure,

18:20.360 --> 18:24.360
and the CPU must access different memory localities.

18:24.360 --> 18:28.360
It's not good for optimal CPU cache usage.

18:28.360 --> 18:34.360
Since we do care about performance, we also needed a more efficient implementation.

18:34.360 --> 18:41.360
So we wanted something as fast as H2O, but with as good quality as nghttp2.

18:41.360 --> 18:49.360
We noticed that typically, on the same dependency tree level, we never have more than 100 streams.

18:49.360 --> 18:51.360
So the dependency tree can be very large,

18:51.360 --> 18:55.360
but on the same level we never have a lot of streams.

18:55.360 --> 18:57.360
Typically, we have only one stream, right?

18:58.360 --> 19:02.360
Second, we noticed that reprioritization is frequent.

19:02.360 --> 19:13.360
So when we reprioritize one of the streams, we need to reinsert the stream into the dependency tree and recalculate its weight.

19:13.360 --> 19:25.360
So the first question was which data structure we should use to keep streams and make the weighted fair queueing algorithm work.

19:25.360 --> 19:28.360
We considered several data structures.

19:28.360 --> 19:35.360
These are elastic binary trees, and binary and Fibonacci heaps, as in H2O.

19:35.360 --> 19:39.360
And we ended up with the elastic binary tree.

19:39.360 --> 19:45.360
Let me talk more about this tree, because it's a very unique data structure.

19:45.360 --> 19:47.360
It's not described anywhere, not even on

19:47.360 --> 19:48.360
Wikipedia.

19:48.360 --> 19:53.360
The data structure is used by HAProxy, also for scheduling purposes.

19:53.360 --> 19:58.360
It was invented by Willy Tarreau, the original author of HAProxy.

19:58.360 --> 20:04.360
And the unique feature of the tree is that it's essentially a binary trie.

20:04.360 --> 20:08.360
It means that if you have a large number, say 10 bits,

20:08.360 --> 20:12.360
each bit represents a zero or a one.

20:12.360 --> 20:17.360
And it means that in the binary tree, on each level, you decide whether to go to the left branch,

20:17.360 --> 20:21.360
for a zero bit, or to the right, for a one bit.

20:21.360 --> 20:25.360
So you could end up with a tree of depth 10.

20:25.360 --> 20:35.360
But in the case of the elastic binary tree, you can pack the bits, and not so many levels are introduced.

20:36.360 --> 20:43.360
The good thing about this binary tree is that all the nodes keep data.

20:43.360 --> 20:50.360
So the CPU doesn't access many different memory locations.

20:50.360 --> 21:00.360
So when you use the data structure to pick the next stream, the CPU accesses only nearby memory locations.

21:00.360 --> 21:06.360
And, naturally, it's not a balanced binary tree.

21:06.360 --> 21:12.360
So it works very well for small data sets, but it's not good for large data sets.

21:12.360 --> 21:18.360
But since we don't have many streams on the same level, it works best for us.

21:18.360 --> 21:23.360
We made a microbenchmark for 100 elements.

21:23.360 --> 21:27.360
The tree was somewhat slower on insertions,

21:27.360 --> 21:32.360
but much faster than the heap implementations, like the Fibonacci and binary heaps.

21:32.360 --> 21:36.360
So we chose the elastic binary tree.
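
The bit-by-bit descent described above can be illustrated with a toy Python binary trie. This is not the real ebtree, which is intrusive C code that keeps data in every node and packs bits so that the tree stays shallow; it is just the underlying idea: go left for a zero bit, right for a one bit, and pop the minimum by always taking the leftmost path.

```python
BITS = 8  # toy key width; ebtree handles wider keys

class Node:
    def __init__(self):
        self.child = [None, None]  # index 0 = left (bit 0), 1 = right (bit 1)
        self.key = None            # set on leaves only in this toy version

def insert(root, key):
    node = root
    for i in range(BITS - 1, -1, -1):  # walk from the most significant bit
        b = (key >> i) & 1
        if node.child[b] is None:
            node.child[b] = Node()
        node = node.child[b]
    node.key = key

def pop_min(root):
    """Remove and return the smallest key: always take the leftmost path."""
    node, path = root, []
    while node.key is None:
        b = 0 if node.child[0] is not None else 1
        path.append((node, b))
        node = node.child[b]
    # unlink the leaf and prune branches that became empty
    parent, b = path[-1]
    parent.child[b] = None
    for parent, b in reversed(path[:-1]):
        ch = parent.child[b]
        if ch.child[0] is None and ch.child[1] is None and ch.key is None:
            parent.child[b] = None
    return node.key
```

Popping the minimum is just "descend left while you can", which is why such a structure suits a scheduler that repeatedly extracts the stream with the smallest virtual time.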

21:36.360 --> 21:39.360
So, we are finished with the data structure.

21:39.360 --> 21:42.360
Let's move on to the algorithm for weighted fair queueing.

21:42.360 --> 21:50.360
Kazuho Oku, about ten years ago, presented a scheduling algorithm for H2O, an O(1) one.

21:51.360 --> 21:57.360
Weighted fair queueing is a well-studied topic, and it appears in many applications;

21:57.360 --> 22:05.360
even the latest Linux kernel scheduler, EEVDF, also uses weighted fair queueing.

22:05.360 --> 22:13.360
There are different papers, different research articles, but most of them are about the same ideas,

22:13.360 --> 22:20.360
where we have a deficit or a penalty, like the lag in the case of the Linux kernel scheduler.

22:20.360 --> 22:25.360
And the fair queueing algorithm is very intuitive.

22:25.360 --> 22:31.360
So in this case, we compute a penalty.

22:31.360 --> 22:36.360
For the penalty, we take the amount of data;

22:36.360 --> 22:41.360
it's how much data was actually sent last time.

22:42.360 --> 22:44.360
We multiply it by some constant.

22:44.360 --> 22:46.360
In this case, it's 256.

22:46.360 --> 22:51.360
You can choose any other constant; and there is also a pending, accumulated value.

22:51.360 --> 23:01.360
Next, we compute the cycle; this determines exactly which stream is to be sent next.

23:01.360 --> 23:06.360
The larger the cycle is, the later the stream will be sent.

23:06.360 --> 23:11.360
So the larger the penalty, the later the stream is going to be sent.

23:11.360 --> 23:16.360
And the more weight the stream has, the earlier it's going to be sent.

23:16.360 --> 23:17.360
It's very intuitive.

23:17.360 --> 23:21.360
And next, we update the pending value.

23:21.360 --> 23:24.360
It's just the residual of the penalty.

23:24.360 --> 23:30.360
So for the cycle, we use the quotient of the penalty division, and as the accumulated value,

23:30.360 --> 23:33.360
the pending keeps the residual value.

23:33.360 --> 23:39.360
This is an example with only two streams, A and B, with weights 10 and 5.

23:39.360 --> 23:44.360
Pay attention that A and B send data in different chunks.

23:44.360 --> 23:53.360
The first sends 100, the third sends 150, and the last transmission is 150.

23:53.360 --> 23:59.360
So, being sent in different chunks, we end up with transmissions of the same total amount.

23:59.360 --> 24:07.360
And the algorithm starts to transmit A and B one after another.

24:07.360 --> 24:09.360
So this is true fair queueing.

24:09.360 --> 24:15.360
Both streams start to be transmitted from the very first transmission.
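
The penalty and cycle computation just described can be sketched in Python. This mirrors the scheme on the slide (penalty is the bytes sent times a constant, the cycle advances by the penalty divided by the weight, and the remainder is kept as the pending value); it is an illustration, not our actual production code.

```python
import heapq

K = 256  # the constant from the slide; any value would do

class Stream:
    def __init__(self, sid, weight):
        self.sid, self.weight = sid, weight
        self.cycle = 0    # virtual time: the smallest cycle is sent next
        self.pending = 0  # remainder of the penalty division

def schedule(streams, chunks, rounds):
    """Repeatedly 'send' a chunk for the stream with the smallest cycle."""
    heap = [(s.cycle, s.sid, s) for s in streams]
    heapq.heapify(heap)
    sent = {s.sid: 0 for s in streams}
    for _ in range(rounds):
        _, _, s = heapq.heappop(heap)
        nbytes = chunks[s.sid]            # bytes this stream sends now
        sent[s.sid] += nbytes
        penalty = nbytes * K + s.pending  # data sent times the constant
        s.cycle += penalty // s.weight    # heavier send means a later turn
        s.pending = penalty % s.weight    # keep the remainder for later
        heapq.heappush(heap, (s.cycle, s.sid, s))
    return sent

# With weights 10 and 5 and equal chunks, A is scheduled twice as often:
print(schedule([Stream("A", 10), Stream("B", 5)], {"A": 100, "B": 100}, 30))
```

Because the penalty accounts for the actual bytes sent, the long-run bandwidth ratio converges to the weight ratio no matter how the chunk sizes differ.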

24:15.360 --> 24:22.360
The key difference between this algorithm, which is also used in nghttp2,

24:22.360 --> 24:26.360
and H2O is that H2O ignores the amount of data sent.

24:26.360 --> 24:33.360
So it only orders frames, and as much data as you have

24:33.360 --> 24:38.360
can be sent by a stream when it is scheduled.

24:38.360 --> 24:40.360
So it's just about ordering.

24:40.360 --> 24:46.360
So you can imagine that in this case we have streams A and B.

24:47.360 --> 24:51.360
B has half the priority of A.

24:51.360 --> 24:58.360
But suppose that in one iteration B sends 10 times more data than A.

24:58.360 --> 25:06.360
Then with this algorithm, B will be consuming five times more bandwidth than A,

25:06.360 --> 25:09.360
while B is still the less prioritized one.

25:09.360 --> 25:13.360
So it's not really a fair queueing algorithm.
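
This thought experiment is easy to check numerically. Below is a hypothetical ordering-only scheduler (my own sketch, not H2O's code) that hands out turns by weight but ignores how many bytes each turn moves; B, with half the weight, ends up with five times the bandwidth.

```python
def ordering_only(weights, chunks, rounds):
    """Weighted round robin over turns, blind to bytes per turn."""
    turns = []
    for sid, w in weights.items():
        turns += [sid] * w  # a stream gets `w` turns per cycle
    sent = {sid: 0 for sid in weights}
    for i in range(rounds):
        sid = turns[i % len(turns)]
        sent[sid] += chunks[sid]  # each turn moves one whole chunk
    return sent

# A has twice the weight of B, but B's chunks are 10 times larger,
# so over 150 turns B moves five times more bytes than A.
print(ordering_only({"A": 10, "B": 5}, {"A": 100, "B": 1000}, 150))
```

A gets 100 turns of 100 bytes (10,000 bytes) while B gets 50 turns of 1,000 bytes (50,000 bytes), matching the 5x figure from the talk.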

25:13.360 --> 25:20.360
In our case, in Tempesta FW, we know how much data we sent,

25:20.360 --> 25:23.360
so we can just account for it.

25:23.360 --> 25:28.360
That is basically all about priorities.

25:28.360 --> 25:31.360
So, it was around mid-2024.

25:31.360 --> 25:37.360
We implemented the new scheduler, the weighted fair scheduler.

25:37.360 --> 25:43.360
And Firefox and Chrome announced that they would move to RFC

25:43.360 --> 25:44.360
9218.

25:44.360 --> 25:50.360
And with this announcement, we found that we were not alone

25:50.360 --> 25:53.360
in not implementing RFC

25:53.360 --> 25:54.360
9218.

25:54.360 --> 25:58.360
So, nghttp2 supports this feature.

25:58.360 --> 26:01.360
H2O doesn't support it at all.

26:01.360 --> 26:07.360
And another implementation supports priorities, but not in terms of RFC

26:07.360 --> 26:08.360
9218.

26:08.360 --> 26:14.360
So there are few server implementations, while browsers started to use the feature.

26:14.360 --> 26:20.360
Firefox 128 was the first version that started to use the feature by default.

26:20.360 --> 26:27.360
Firefox 126 and 127 were able to use RFC

26:27.360 --> 26:30.360
9218, if the server also announced support.

26:30.360 --> 26:35.360
But by default, they didn't announce the feature.

26:35.360 --> 26:41.360
So in this mode, Firefox behaves just the same as previously,

26:41.360 --> 26:46.360
but it adds the settings entry SETTINGS_NO_RFC7540_PRIORITIES,

26:46.360 --> 26:51.360
which means the streams in the connection are going to use the new standard.

26:51.360 --> 26:55.360
And in addition to the previous scheme with weights and exclusive flags,

26:55.360 --> 27:01.360
it adds urgency and the incremental flag.

27:01.360 --> 27:09.360
So with this standard, if a server implementation is RFC 7540

27:09.360 --> 27:13.360
compliant only, Firefox can use weight and the exclusive flag.

27:13.360 --> 27:20.360
If the server understands RFC 9218, then it can use urgency and the incremental flag.
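
For reference, RFC 9218 carries urgency and the incremental flag in a Structured Fields value such as `u=2, i`, sent in the `priority` header or a PRIORITY_UPDATE frame. Here is a minimal hand-rolled parser sketch; a real server should use a proper Structured Fields parser.

```python
def parse_priority(value):
    """Parse an RFC 9218 priority field value like 'u=2, i'.

    Returns (urgency, incremental). Defaults are urgency 3 and
    non-incremental; urgency outside 0..7 is ignored.
    """
    urgency, incremental = 3, False
    for item in value.split(","):
        item = item.strip()
        if item.startswith("u="):
            u = int(item[2:])
            if 0 <= u <= 7:
                urgency = u
        elif item == "i":
            incremental = True
    return urgency, incremental
```

With the defaults in place, an empty or absent header simply means urgency 3, non-incremental, which is what the RFC prescribes.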

27:20.360 --> 27:27.360
Meanwhile, we have a fair logarithmic scheduler.

27:27.360 --> 27:35.360
We did a lot of optimizations of memory layout to improve CPU cache usage, including stream

27:35.360 --> 27:36.360
preallocations.

27:36.360 --> 27:44.360
So if your web page creates 10 streams, then the first 10 streams are nearly for free,

27:44.360 --> 27:49.360
but further streams will involve more memory allocations.

27:49.360 --> 27:54.360
Also, if there is no bandwidth sharing

27:54.360 --> 27:59.360
at any layer, we don't spend any time on bandwidth sharing logic whatsoever.

27:59.360 --> 28:06.360
And also, by default it's RFC 9218 compliant in terms of weights.

28:06.360 --> 28:11.360
And if you use progressive JPEGs or a web API

28:11.360 --> 28:17.360
which can benefit from bandwidth sharing, you can use a configuration option.

28:17.360 --> 28:24.360
The next thing about Tempesta FW is that it's built into the Linux TCP/IP stack.

28:24.360 --> 28:29.360
And essentially, the HTTP layer is called by TCP callbacks.

28:29.360 --> 28:37.360
So on each iteration the HTTP scheduler knows precisely how much data TCP is able to send.

28:37.360 --> 28:43.360
It can read the TCP congestion window and also the receive side window,

28:43.360 --> 28:51.360
and form a TLS record and an HTTP/2 frame of optimal size for the current state on the TCP level.

28:51.360 --> 28:57.360
There are extensions for HTTP proxies, for example for H2O and nginx,

28:57.360 --> 29:02.360
but they use predefined values, so it's asynchronous stuff.

29:03.360 --> 29:09.360
And if you form TLS records of non-optimal size,

29:09.360 --> 29:12.360
it can be crucial for latency.

29:12.360 --> 29:18.360
With TLS records, we have a trailer; each TLS record finishes with a trailer.

29:18.360 --> 29:24.360
And the trailer contains the authentication data required to start the decryption of the record.

29:24.360 --> 29:30.360
So for example, if your application server forms a 16 KB TLS record,

29:30.360 --> 29:33.360
but TCP is able to deliver only 1 KB,

29:33.360 --> 29:37.360
then the head of the TLS record is delivered to the client,

29:37.360 --> 29:43.360
but the tail of the TLS record stays in the sender's buffer.

29:43.360 --> 29:46.360
And the receiver cannot start TLS decryption,

29:46.360 --> 29:50.360
because it doesn't have the TLS trailer.

29:50.360 --> 29:53.360
And only when TCP delivers the trailer

29:53.360 --> 29:55.360
can the decryption process start.

29:56.360 --> 30:02.360
So with good knowledge about the TCP state, we can avoid such a situation.
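
The record sizing idea can be sketched as follows. The overhead constant is an assumption (it depends on the TLS version and cipher), and this is an illustration of the principle, not our actual code: split the payload into records that each fit entirely into the current TCP send budget, so no trailer is left stuck in the sender's buffer.

```python
TLS_OVERHEAD = 29  # assumed per-record header plus auth tag; cipher-dependent

def plan_records(payload_len, tcp_budget, max_record=16384):
    """Split `payload_len` bytes into TLS record bodies that each fit,
    with their overhead, into the current TCP send budget, so the
    receiver never waits for a trailer held back by the sender."""
    records = []
    while payload_len > 0 and tcp_budget > TLS_OVERHEAD:
        body = min(payload_len, max_record, tcp_budget - TLS_OVERHEAD)
        records.append(body)
        payload_len -= body
        tcp_budget -= body + TLS_OVERHEAD
    return records

# 16 KB of data but only 4 KB of TCP budget: emit one small, fully
# deliverable record instead of a 16 KB record that is sent in part.
print(plan_records(16384, 4096))
```

The key point is that the record boundary is chosen from the live TCP state, which is exactly the information a user-space server behind a socket does not have.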

30:02.360 --> 30:05.360
The last thing I wanted to cover is security;

30:05.360 --> 30:11.360
there are a couple of security issues involving HTTP/2 stream mechanisms.

30:11.360 --> 30:16.360
The first one is pretty old, about 10 years old, about the dependency tree:

30:16.360 --> 30:21.360
if your web server implementation isn't careful,

30:21.360 --> 30:26.360
then an attacker can send a built dependency tree

30:26.360 --> 30:29.360
where the root node depends on a leaf.

30:29.360 --> 30:32.360
So we have a cycle in the tree,

30:32.360 --> 30:40.360
and the scheduler goes into an infinite loop trying to get the next stream.
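
A simple guard against this is to check, before linking, whether the proposed parent already lies in the subtree of the stream being reprioritized. A minimal sketch, with names of my choosing:

```python
class Stream:
    def __init__(self, sid, parent=None):
        self.sid = sid
        self.parent = parent

def would_cycle(stream, new_parent):
    """True if making `stream` depend on `new_parent` creates a cycle,
    i.e. `new_parent` is `stream` itself or one of its descendants."""
    node = new_parent
    while node is not None:
        if node is stream:
            return True
        node = node.parent
    return False
```

RFC 7540 instead requires rewriting the tree (moving the descendant up first, as discussed earlier in the talk), but either way the implementation must detect this case rather than walk the loop forever.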

30:40.360 --> 30:46.360
The next one is about the PRIORITY frame and the related vulnerability

30:47.360 --> 30:51.360
from last year about control frame

30:51.360 --> 30:55.360
floods. Basically, with priority frames,

30:55.360 --> 30:59.360
we saw that priority frames involve dependency tree reconstruction,

30:59.360 --> 31:03.360
a relatively heavyweight operation,

31:03.360 --> 31:06.360
so it's a good vector for a DoS attack,

31:06.360 --> 31:10.360
and sending a lot of priority frames,

31:10.360 --> 31:15.360
and also other control frames, could lead to a DoS vulnerability.

31:15.360 --> 31:20.360
Typically, this can be mitigated with limits, like limiting

31:20.360 --> 31:25.360
the maximum number of open HTTP/2 streams.

31:25.360 --> 31:28.360
I think all the implementations do this,

31:28.360 --> 31:30.360
so you limit the size of the dependency tree,

31:30.360 --> 31:35.360
and essentially you get much cheaper tree operations.

31:35.360 --> 31:37.360
And also typically,

31:37.360 --> 31:44.360
implementations limit the number of control frames.

31:44.360 --> 31:47.360
And that's it from our side;

31:47.360 --> 31:49.360
we have a wiki here.

31:49.360 --> 31:55.360
You can discover more about our research on HTTP/2 stream scheduling,

31:55.360 --> 32:00.360
with the Python code implementing the H2O

32:00.360 --> 32:03.360
algorithm, and all of this is open source.

32:03.360 --> 32:07.360
And finally, if you enjoy hacking Linux kernel code,

32:07.360 --> 32:10.360
doing some sophisticated algorithm programming,

32:10.360 --> 32:13.360
and so on, we definitely want to hear from you,

32:13.360 --> 32:15.360
so we're hiring.

32:15.360 --> 32:16.360
So that's all.

32:16.360 --> 32:19.360
I think we have some time for questions.

32:19.360 --> 32:20.360
Yep.

32:20.360 --> 32:22.360
Thank you for listening.

32:22.360 --> 32:38.360
Questions?

32:38.360 --> 32:42.360
So beyond what you presented today for the algorithms,

32:42.360 --> 32:45.360
is there anything else, for example for the edge cases,

32:45.360 --> 32:49.360
or for the situations where we have packet loss,

32:49.360 --> 32:52.360
and data needs to be sent again on a stream.

32:52.360 --> 32:55.360
Or is it outside of the scope of what you've got?

32:55.360 --> 32:57.360
Yeah, I think it's outside of the scope,

32:57.360 --> 32:59.360
because we do not amend the TCP logic at all,

32:59.360 --> 33:01.360
so we just work on TCP callbacks,

33:01.360 --> 33:04.360
and we let TCP do its work.

33:04.360 --> 33:07.360
So it's under that layer, yeah.


33:15.360 --> 33:18.360
Yeah, it's a completely separate layer,

33:18.360 --> 33:20.360
as for any application server,

33:20.360 --> 33:23.360
but we are just called via TCP callbacks,

33:23.360 --> 33:25.360
so that's the difference.

33:25.360 --> 33:27.360
Yeah, thank you for the question.

33:27.360 --> 33:41.360
No?

33:41.360 --> 33:46.360
Okay, thank you.

33:46.360 --> 33:47.360
Thank you.

33:47.360 --> 33:48.360
Thank you so much.

33:48.360 --> 33:49.360
Thank you.

33:49.360 --> 33:50.360
Thank you.

33:50.360 --> 33:53.360
Thank you.

