02. Setting up the OpenAI PHP client

Transcript

00:00
OK, so we're starting off here with a completely fresh project with Livewire pulled in. The first thing that we want to do is head over and get OpenAI installed and set up.
00:11
So we've got a PHP client here, which we can use to connect using our project API keys. So let's start with this first of all. If you haven't already, go ahead and sign up for an account here
00:23
and you'll be able to come into this API section and start to manage your projects. So what I'm going to do is go over to a project that I've already created for this.
00:33
But if not, go ahead and hit Create Project, give it a name, and you'll be booted into that. So let's head over to this Livewire chat project. And as you can see, I've got no API keys just yet.
00:42
Let's go ahead and create one now. And then we'll put this immediately into our Laravel project. OK, so we'll go ahead and just set this under this project.
00:51
We'll give all permissions and we'll create this secret key. OK, so this will be given to us and it won't be shown again. So we need to make sure that we copy this first of all. And we're going to head over to our project.
01:02
We're going to go straight over to our .env file. We'll come all the way down here and we'll create an OPENAI_SECRET environment variable and we'll put that in there.
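As a sketch, the .env entry might look like this (the variable name is the one used in the walkthrough; the value shown is a placeholder for the secret key copied from the OpenAI dashboard):

```ini
# .env — paste the key exactly as shown when it was created
OPENAI_SECRET=sk-...
```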
01:13
OK, before we do anything else, let's create some config for this so we don't have to read directly from env(). So I'm going to go into our config directory
01:21
and let's create a new PHP file in here called openai.php. And we'll just basically return an array here with the secret that we've just defined in here
01:32
and then we can easily access this. So that's openai and secret. Great. OK, so next up is going ahead and actually installing a client
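A sketch of that config file, assuming the standard Laravel pattern of reading the environment variable through env() (the openai.php filename and the secret key come from the walkthrough):

```php
<?php

// config/openai.php
return [
    // Reads OPENAI_SECRET from .env; accessible as config('openai.secret')
    'secret' => env('OPENAI_SECRET'),
];
```

Going through config rather than env() directly also means config caching (php artisan config:cache) keeps working in production, where env() calls outside config files return null.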
01:41
so we can make requests here. Let's go ahead and pull this in. We'll bind it to our container. We'll make an example request just
01:47
to see that this is working and then we're good to go. So let's come over to our project and we're going to require this in, of course, with Composer. Now, while that's finishing, let's just go over
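Assuming the framework-agnostic openai-php client shown at the start of the video, the Composer command would be along these lines:

```shell
composer require openai-php/client
```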
01:57
to our app service provider and under this register area, we can go ahead and just bind this to our container. So let's go into our app. We'll just register this as a singleton called OpenAI
02:09
and then inside of this closure, we'll just boot this up and return it. So let's go ahead and use OpenAI::client here and let's go ahead and pass in our key.
02:19
So let's say config and we've just created that file now. It's OpenAI and then .secret. Let's make an example request just over in our web routes. We'll just keep this simple.
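The container binding described above might look like this in the service provider (a sketch, assuming the openai-php/client package, which exposes an OpenAI::client() factory):

```php
<?php

namespace App\Providers;

use Illuminate\Support\ServiceProvider;
use OpenAI;

class AppServiceProvider extends ServiceProvider
{
    public function register(): void
    {
        // Bind a single shared OpenAI client instance into the container,
        // built from the secret we defined in config/openai.php
        $this->app->singleton('openai', function () {
            return OpenAI::client(config('openai.secret'));
        });
    }
}
```

A singleton makes sense here because the client is stateless per request and there's no reason to construct it more than once.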
02:28
So let's go ahead and create out a response variable here. We'll go ahead and fetch the OpenAI item out of our container and then we can just start to make a request. So since we're dealing with chat,
02:39
we're going to go ahead and call chat here and then we're just going to call create. Later on, we're going to be using createStreamed because we're going to want a streamed response.
02:48
What will happen here when we send this request through is that it will take a little bit of time to come back, because we're not getting it in those chunks that we'd expect within a chat interface.
02:58
OK, so inside of here, we need to pass two things. First of all, the model. So depending on when you're watching this, it could change. I'm going to choose GPT-4 and then we
03:07
have a bunch of messages that we want to send across. So what we can do in here is create a nested array. And the first thing that we're going to define is the role. So the role here could be system and we'll
03:20
go ahead and give this some content. And we're going to do something like you are a friendly bot to help with web development, something like that.
03:31
That just gives this an idea of what it needs to do. Then what we can do is change the role over to user and then we can go ahead and pass our message in. So both of these things will be sent over.
03:44
The context here will be defined and then we will ask a question within this user role. So link me to the Laravel docs and let's just leave it at that and see what we get back when we make a request to this.
03:57
So let's die dump on the response and see what comes back. OK, let's head over to the browser. We're going to give this a refresh
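Putting the whole request together, a sketch of the example route (the route path and the use of app() to resolve the container binding are assumptions; the model and messages follow the walkthrough):

```php
<?php

// routes/web.php
use Illuminate\Support\Facades\Route;

Route::get('/', function () {
    // Resolve the client we bound as a singleton in the service provider
    $response = app('openai')->chat()->create([
        'model' => 'gpt-4',
        'messages' => [
            [
                // The system message sets the context for the conversation
                'role' => 'system',
                'content' => 'You are a friendly bot to help with web development.',
            ],
            [
                // The user message is the actual question
                'role' => 'user',
                'content' => 'Link me to the Laravel docs.',
            ],
        ],
    ]);

    // Die and dump the full response so we can inspect the choices array
    dd($response);
});
```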
04:03
and obviously it's going to take a little while because the entire response is coming back here. And let's go over to this choices array and see what this contains.
04:12
So we've got a created response choice and we've got message here and there we go. Yeah, sure enough, the assistant, which is another role, has given us the content.
04:22
Sure, here is the link to the Laravel documentation. So it's super easy to send a request and get it back. But of course, we don't want to be waiting a huge amount of time for the entire response to come back,
04:32
which is the entire point of this course. So now that we've done that, let's go over and start to get our chat interface built up and we can send these requests in a slightly different way.

Overview

Let’s learn how wire:stream can help us stream ChatGPT responses as they arrive, by building a chat interface with Livewire.

Each message we send and receive will be shown in chat history. It even remembers the conversation context.

Sure, there are a ton of ChatGPT wrappers out there, but by the end of this course, you’ll have wire:stream in your toolkit for future projects.

Alex Garrett-Smith
Hey, I'm the founder of Codecourse!
