03. Working with chunks


Now, it's really difficult to talk
about how this solution works without drawing a bunch of diagrams, and I don't generally do that. So we're going to dive straight in and talk about how we can chunk these records
and load each chunk as we go. So we don't even have a component yet. I'm going to go ahead and make an article index component. And we're going to have quite a few components in here
to make all this fit together. But this will be a nice starting point. So I'm just going to dump this over in the dashboard page here.
So let's say Livewire article index. And that's pretty much it. We'll bump straight over to the article index class and start to talk about how we can pull this in.
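As a sketch, assuming the component class is called ArticleIndex (so its tag name is article-index) and the dashboard view lives at resources/views/dashboard.blade.php, dropping it in looks roughly like:

```blade
{{-- resources/views/dashboard.blade.php (assumed location) --}}
<livewire:article-index />
```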
So let's go ahead and just implement the mount lifecycle hook within Livewire to fetch our records how we maybe usually would. So usually, we would go ahead and maybe assign this
to a collection here. So let's just say public and we'll say collection articles. And then we would go ahead and initiate that in here. Let's just pull that collection class in from any of these.
So we would say this articles. And we would go ahead and say article. And of course, order these in the latest order. And then we would just maybe get them.
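Roughly, that "usual" version might look like this. The Article model namespace is an assumption here:

```php
<?php

namespace App\Livewire;

use App\Models\Article; // assumed model location
use Illuminate\Support\Collection;
use Livewire\Component;

class ArticleIndex extends Component
{
    public Collection $articles;

    public function mount(): void
    {
        // Loads every article up front — exactly what we want to avoid
        $this->articles = Article::latest()->get();
    }
}
```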
So that could work. But what we are now doing is, first of all, loading in all data, which we don't want to do. The solution that we saw before from the introduction
uses pagination to go ahead and paginate these by a certain page. And then it pushes onto a collection when we get to the next page.
But again, that's the reason it slows, because we're pushing more data into here. And eventually, it's going to get slower. So what we're actually going to do here
is we're only going to pluck out the ID of the record. So this might seem a bit mad, but bear with me. And then we're going to go ahead and chunk these by 10. And then we can go ahead and say get.
In fact, no, we're going to go ahead and say to array. That makes more sense. So this now becomes an array of chunks, which we're going to call chunks.
And we'll just set that as an empty array by default. And of course, just assign that properly. OK, so let's just see what these chunks actually look like. In fact, we could just die dump the chunks down here.
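Sketched out, the mount hook now only plucks IDs and chunks them (the property name and chunk size of 10 follow what's described above):

```php
public array $chunks = [];

public function mount(): void
{
    $this->chunks = Article::latest()
        ->pluck('id') // only the IDs — no titles, teasers, etc.
        ->chunk(10)   // groups of 10 IDs
        ->toArray();  // an array of arrays, e.g. [[100, 99, ...], [90, 89, ...], ...]
}
```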
That might give us a little bit of a better view of this. Let's head over and just give this a refresh. OK, so we know that we've got quite a few records in here. We've got 100.
And each of these now contains the first 10, the next 10, and so on and so forth. So it's kind of like pagination, but we're not loading in all of the data, the actual title, the teaser,
all of that kind of stuff. We're just loading in the IDs. So not very slow to do. This is pretty fast to actually get this working.
Now, what are we going to do with these chunks? Well, we're going to iterate over them in batches. So let's go over to our article index. And let's go ahead and just start
to iterate over these chunks. So we'll go ahead and say we'll use a for loop here, because we're working with an array. And we're going to set chunk initially to 0.
Now, we need a condition here of when we should be loading this stuff in. So I'm just going to say chunk less than or equal to 1. So I'm going to assume this is the page.
Effectively, what we want to do is load these IDs within batches based on the page number that we're on, even though we're not actually using pagination. So we're going to go ahead and increment the chunk here.
We are going to end the for here. And then let's just dump the chunk out here. So let's just dump the current chunk that we're working with. Now, obviously, this is just going
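In the component's Blade view, that loop might be sketched like this, where the hard-coded 1 is just a stand-in for the page number that comes later:

```blade
@for ($chunk = 0; $chunk <= 1; $chunk++)
    {{ $chunk }} {{-- just dump the chunk index for now --}}
@endfor
```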
to show 1 on the page, or 0 in this case. Obviously, it's 0 indexed. That's the first chunk that we want to load. So think of it like this.
We've got the first chunk with an index of 0. We fetch all the IDs from that chunk, which we have stored in that chunks array. We load them in from the database.
And then when we scroll down, however we trigger that, whether it's with the Intersection Observer API or by clicking a button, we make a request to load the next chunk.
What that means is up front, we're loading all of the IDs that we need. So we're not pushing anywhere. And we're not doing anything like that.
But we're just making a database request for 10 records every single time we come down. So we're not pushing to an array of records that's eventually going to get huge and slow
each of the requests down. We are just loading 10, then loading 10, then loading 10, and not doing much else. So with that in mind, how do we limit this?
Well, let's go ahead and add a really simple implementation of this to go ahead and increment the page number when we click a button.
So I'm going to go ahead and create a standard button down here. Let's just manually do that in there. This is going to be to load more.
What it will do at the moment is just push the amount of chunks that we're loading. It won't actually do much. So this is going to be some sort of page number
that we increment. And this will increment the page number. We don't have that available just yet. But we can go ahead and add this over in here.
So let's add the page in here. And of course, by default, this is going to be 1. So now what we can do is add in a method to increment the page number.
So we could call this load more. It doesn't really matter. And we just want to go ahead and take the page and reset it to the page plus 1 or, of course,
just increment it normally. And then we can invoke this from here using Livewire. So wire click, load more. So now what's going to happen is because we've
added the page within this for and within the condition, when we load more, we just load the next chunk in. So you can see that, obviously, we've got 10 chunks. If I keep doing this, it's, of course,
just going to keep loading chunks that aren't there because we don't have any kind of condition here to stop this from actually happening. So what we can do is inside of load more,
we can do some sort of page check to check if we have any more records. So let's go ahead and create our method in here, which is going to be handy later when we do this
within our template called has more pages. So this is kind of like our own pagination implementation, but just really loosely within a Livewire component. So we're going to go ahead and say, well, this page,
now how do we determine pages within what we're building? Well, it's just the amount of chunks. So we're going to say, well, is that less than the count of the chunks that we have inside of here?
We've got 10 chunks. Technically, we've got 10 pages. So that determines if we have any more pages left. Now for load more, what we can do
is we can protect this by just doing a return, an early return, if we don't have any more pages. So this has more pages, and we shouldn't get any more chunks loaded once we've done this.
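A sketch of that guard, with has more pages comparing the current page against the number of chunks:

```php
public function hasMorePages(): bool
{
    // With 10 chunks, there are effectively 10 "pages"
    return $this->page < count($this->chunks);
}

public function loadMore(): void
{
    if (! $this->hasMorePages()) {
        return; // nothing left — bail out early
    }

    $this->page++;
}
```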
So I can hit load more, and it should just stop doing anything. I'm clicking on this. Of course, nothing's happening.
Now, we can actually use this has more pages method over in here to stop this load more. Now, of course, we're eventually going to implement an intersection observer to get this working.
But for now, let's just get rid of this and use this button as our load more trigger, wrapped in that has more pages check so it disappears when there's nothing left.
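In the view, hiding the button might look like this ($this is available inside a Livewire component's Blade view):

```blade
@if ($this->hasMorePages())
    <button wire:click="loadMore">Load more</button>
@endif
```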
Let's go ahead and just click. There we go. We're pretty much halfway there. So now what we need to do is figure out
in the next episode, within each of these batches, how do we go ahead and get their records to actually show them properly? Well, let's jump over.
And now that we've hopefully understood the concept of these batches and how we're loading stuff in and how that makes it fast, let's go ahead and actually get some data on the page.
Infinite scrolling in Livewire can be simple to implement, but things slow down as more data rolls in.

In this short course, we’ll look at a technique to load batches of records upfront, then fetch and render each batch only when it’s needed.

The result? No drop in performance, regardless of how many records a user scrolls through.

Alex Garrett-Smith
Hey, I'm the founder of Codecourse!