04. Triggering an OpenAI response

Transcript

00:00
As I mentioned in the last episode, whenever one of these messages has a role of assistant and the content is empty, what we then want to do is render a Livewire component, which in its mount method will start to trigger a response.
00:14
And this isn't quite going to work how we want at the moment. We're going to switch this over to wire:stream after we've done a little bit of digging to see how this works. But let's go ahead and push this after we have sent a message, and let's add this to our stack over in chat. So in here, we're just going to basically copy the entire if statement and pull that out here.
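As a rough sketch of what pushing that placeholder onto the stack might look like (property and method names here are assumptions, not necessarily the course's exact code):

```php
// Hypothetical send handler on the main Chat component.
// After storing the user's message, we push an assistant
// message with empty content; that empty content is the
// signal for the view to render the response component.
public function sendMessage(): void
{
    $this->messages[] = ['role' => 'user', 'content' => $this->message];
    $this->messages[] = ['role' => 'assistant', 'content' => ''];

    $this->message = '';
}
```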
00:35
And let's go and have a look at what we need to output. So it's just all of this stuff in here. So let's pull all of this into here. And there we go.
00:45
This is going to be assistant. So at the moment, if we just type hey, it's automatically going to respond with what we have put in there, of course. But we don't have any content inside of here just yet. So what we're going to do is break this up into a separate component.
01:01
So the component's job here is to make a request to the OpenAI API and then stream the response in. So I'm going to go ahead and cut all of this out, and let's create a new component. So let's make a chat response component, which is specifically for OpenAI. And let's go ahead and register this just in here, or output this in here.
01:23
So let's say chat response. And let's just open this up for now over here. And just paste all of that content in. So we shouldn't see any difference here if we just type hey, we get exactly the same thing back.
01:34
So inside of the main chat where we're already outputting this, let's think about the kind of data that we need to pass down here. The first thing that we need, because we're outputting a component here, is a key, so a wire:key. And we're just going to set this to the key that we get back from this array, since we're not working with anything within the database. That should fix any hydration issues.
01:56
The next thing and the really important thing we want to do is send through a prompt. So where does this prompt come from? Okay, so this isn't the best solution, but let's go ahead and just grab the messages that we already have. And let's grab the key that we're currently iterating through minus one.
02:14
So that's going to give us, if we come back over to chat, the last message that was sent by the user. Of course, we could do some fancy filtering here if we wanted to, and that would make it a little bit easier. Okay, let's go and just fix that up. So we don't need this messages, we can just say messages key.
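A minimal Blade sketch of what passing the key and the previous message down could look like, assuming the loop variables are named `$messages` and `$key` (the `:key` attribute is how Livewire sets the wire:key on a nested component):

```blade
{{-- Hypothetical loop from the main chat view; names are assumptions --}}
@foreach ($messages as $key => $message)
    @if ($message['role'] === 'assistant' && $message['content'] === '')
        {{-- The previous message ($key - 1) is the user's prompt --}}
        <livewire:chat-response
            :key="$key"
            :prompt="$messages[$key - 1]" />
    @endif
@endforeach
```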
02:31
And let's go over to our chat response component here and pick up this prompt. So let's say we've got our prompt in here, which might be an empty array. And let's go and just duplicate the response down to the chat response here. So let's output that prompt just to make sure everything is hooking up.
02:50
Okay, so I'm going to say, hey, and it basically should just repeat what I have. Okay, so we get an error: cannot assign an array to property prompt? Yeah, of course. So over in our chat response here, prompt is actually an array, isn't it?
03:03
That's the array with the role and the content. So let's just adjust this accordingly and output the prompt's content. And let's head over, type hey, and yeah, it just repeats everything that we say, which is fine for now. Okay, so we'll take a look at just sending a standard response to OpenAI.
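Here's a sketch of how the chat response component might pick that prompt up (class and view names are assumptions), with the view simply echoing `$prompt['content']` for now:

```php
// Hypothetical ChatResponse Livewire component. The prompt arrives
// as the full message array (role + content), so it's typed as array.
class ChatResponse extends Component
{
    public array $prompt = [];

    public function render()
    {
        // The view can echo {{ $prompt['content'] }} for now.
        return view('livewire.chat-response');
    }
}
```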
03:21
And then we will convert this over to wire:stream once we've done some digging. So I'm just going to create a getResponse method in here, which we can invoke. And we'll invoke this from our mount method. But we'll change this up to use wire:stream a little bit later.
03:36
And let's just do exactly what we have done before. So let's go and grab out OpenAI from our container. Let's use chat. Let's use create.
03:46
And we want to start to send through any of the properties that we need, like the model. So let's say GPT-4. And we want to send the messages through. So we don't have these at the moment.
03:57
So it would be good to pull all of the messages through inside of here as well. So let's go ahead and grab the messages. We'll head over to chat here and just pass all of the messages through so we can read them.
04:11
Okay, great. So over in chat response now, we can just pass this messages through. And this kind of ties into how we're going to keep context because our messages are always going to contain all of the previous messages that have been sent.
04:24
So that's going to work really nicely. Okay, so I'm just going to go ahead and die dump on this response here. And what we want to do, at least initially (we're not going to end up doing this), is within the mount lifecycle hook in Livewire, just go ahead and invoke getResponse.
04:40
So just to recap, the prompt gets sent through, the messages get sent through. When this component mounts, so within that array that we're iterating through, it will go ahead and call getResponse, and it will send a request off. So let's try this out.
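Putting the recap together, the component might look roughly like this. This assumes the openai-php/laravel package, so the client is resolved from the container via its facade; the die dump is just to inspect the response shape:

```php
// Sketch of the initial (non-streamed) request.
public function mount(): void
{
    $this->getResponse();
}

public function getResponse(): void
{
    $response = \OpenAI\Laravel\Facades\OpenAI::chat()->create([
        'model' => 'gpt-4',
        'messages' => $this->messages,
    ]);

    dd($response); // inspect the shape for now
}
```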
04:55
Let's say, hey, and we'll just wait a little while. And there we go, we get a response back. There's a little bit of a delay, which is why we're going to use streamed responses. But at least we know that we're triggering a request over here.
05:06
And yeah, sure enough, we get a response back. So from here, what we could technically do is when this component mounts, we could go and read the message directly within here. And we could put it into the content of this component,
05:21
which is each of the responses back from OpenAI. But we know that that is going to be super slow. Let's go ahead and do it anyway, just to see what this experience is like. And then we'll switch this over, as I've said before, to wire:stream.
05:36
OK, so let's go ahead and grab out the first item from choices here. And we'll convert this over to an array. Let's go back over and just type hey. And let's see what happens.
05:47
And yeah, sure enough, we get the message back in here. And we get the content of that message. And you can see that we've got the role and the content as well. So in here, what we could do is grab out the message.
06:00
And we could grab the content directly from here. And that should give us back the actual response. So if we were to do this in a standard way and have to wait for things, what we could do is create some sort of response
06:13
variable inside of this component. So let's just do that really quickly, just to see what this looks like. So let's create out this response property. And under chat response, let's just dump the response directly out here.
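A sketch of that response property, with the content pulled from the first choice (again assuming the openai-php/laravel client; the component's view just echoes `$response`):

```php
public string $response = '';

public function getResponse(): void
{
    $result = \OpenAI\Laravel\Facades\OpenAI::chat()->create([
        'model' => 'gpt-4',
        'messages' => $this->messages,
    ]);

    // The first choice carries the assistant's message;
    // its content is the actual reply text.
    $this->response = $result->choices[0]->message->content;
}
```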
06:28
OK, if we head over and I say hey, sure enough, it takes a little while for this request to go through. And I could say link me to the Livewire docs, which will probably be out of date.
06:40
And you can see it takes an incredible amount of time because we're not streaming this. It's working, which is great, so we can improve on this. But as you can see, it's not the best experience.
06:50
Now, one thing that we can do inside of here just before we move on is set a kind of base system message, which we saw before. So if we go ahead and open up our main chat component, what we could do is when this component mounts, we could push
07:05
to our messages, which gets sent through to that component, which will always contain a system message. So we can go ahead and take this and say system. And in content, we can give it sort of an initial prompt.
07:17
So you are a friendly web developer here to help. We could even say something like always start the conversation with hey, for example. So let's go over and see if this works.
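Seeding that base system message might look something like this (a sketch of mount on the main Chat component; the exact wording of the prompt is just the example from above):

```php
// Hypothetical mount() on the main Chat component: seeds the
// conversation with a system message before any user input,
// so it's always sent along with the rest of the context.
public function mount(): void
{
    $this->messages[] = [
        'role' => 'system',
        'content' => 'You are a friendly web developer here to help. '
            . 'Always start the conversation with "Hey".',
    ];
}
```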
07:36
Let's say hey. And there we go. You can see that it will always start with hey inside of here. We're not going to do that, but that just
07:43
gives you an idea of how you can set up that initial prompt to do what you need. OK, so let's go ahead and get rid of that. And we'll just keep it at this for now.

Overview

Let’s learn how wire:stream can help us stream ChatGPT responses as they arrive, by building a chat interface with Livewire.

Each message we send and receive will be shown in chat history. It even remembers the conversation context.

Sure, there are a ton of ChatGPT wrappers out there, but by the end of this course, you’ll have wire:stream in your toolkit for future projects.

Alex Garrett-Smith
Hey, I'm the founder of Codecourse!
