I’m in the process of hiring for a position and I have two candidates. It’s a tough call because both are very proficient but each has some unique attributes. I thought I might ask ChatGPT for assistance thinking it through.
I recorded myself talking through my thoughts on each one as I read through their resumes and the Q&As I’d done with each. Then I uploaded the audio file to the whisper-1 API for transcription (for this I’m using the OpenAI API).
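In case it’s useful, here’s roughly what that transcription step looks like with the OpenAI Python client (the file name is just a placeholder):

```python
# Minimal sketch of the whisper-1 transcription step; "notes.m4a" is a
# placeholder for whatever audio file you recorded.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("notes.m4a", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )

print(transcript.text)  # plain-text transcription of the recording
```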
Then I pasted the transcribed text into GPT-4 and prompted it with: “Above is my transcribed notes comparing two candidates for a position together. Help me think through this decision by asking me questions, one at a time.”
ChatGPT proceeded to ask me really good questions, one after the other. After a while I felt like it had gotten me to think about many new factors and ideas. After about 22 questions I’d had enough, so I asked it to wrap up and summarize our next steps, to which it spit out a bullet-point list of what we’d concluded and what steps we should take next.
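For what it’s worth, I did this by hand in the ChatGPT window, but the same one-question-at-a-time loop could be scripted against the Chat Completions API. A rough sketch (illustrative only, not what I actually ran; the file name is a placeholder):

```python
# Rough sketch of the question-at-a-time loop via the Chat Completions API.
from openai import OpenAI

client = OpenAI()

# "transcript.txt" stands in for the whisper-1 output saved earlier
notes = open("transcript.txt").read()

prompt = (
    notes
    + "\n\nAbove is my transcribed notes comparing two candidates for a "
    "position together. Help me think through this decision by asking me "
    "questions, one at a time."
)
messages = [{"role": "user", "content": prompt}]

while True:
    reply = client.chat.completions.create(model="gpt-4", messages=messages)
    question = reply.choices[0].message.content
    print(question)
    messages.append({"role": "assistant", "content": question})

    answer = input("> ")  # my answer, or "wrap up and summarize our next steps"
    if not answer:
        break
    messages.append({"role": "user", "content": answer})
```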
I don’t know if everyone is using ChatGPT this way, but this is a really useful feedback system.
ChatGPT is incredibly useful for summarising. The next step is to automate the Whisper transcription and add some TTS for the replies, so you can talk to it directly.
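Something like this could cover the TTS half, assuming OpenAI’s speech endpoint (any TTS library would do; the model, voice and file name are just examples):

```python
# Sketch of speaking a reply out loud, assuming OpenAI's TTS endpoint;
# any local text-to-speech library would work just as well here.
from openai import OpenAI

client = OpenAI()

def speak(text: str, out_path: str = "reply.mp3") -> None:
    """Render a ChatGPT reply to an mp3 you can play back."""
    response = client.audio.speech.create(
        model="tts-1",   # example model
        voice="alloy",   # example voice
        input=text,
    )
    response.stream_to_file(out_path)

speak("Here's a summary of what we concluded and the next steps.")
```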
I’m curious: did it lose the plot at any point? I find that after a certain number of questions it rambles a bit - that’s on 3.5 though…
Indeed, it’s really helpful when you’ve got a lot of content and need to summarize and organize it. Combining that with letting it query me based on the context brought a lot more relevant info to the dialogue.
This time it never lost the plot. GPT-4 is pretty stable across its whole context window.
GPT-4 has an 8k-token context window, I believe; GPT-3.5 may have less. A token is roughly three-quarters of a word, so it can really pay off to tell ChatGPT to be concise and not generate large amounts of complex text.
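If you want to know the count before you send anything, tiktoken will tell you; a quick sketch (the file name is a placeholder):

```python
# Quick token count with OpenAI's tiktoken library, so you know how much
# of the context window a prompt will use before you send it.
import tiktoken

enc = tiktoken.encoding_for_model("gpt-4")
notes = open("transcript.txt").read()  # whatever text you're about to send
print(len(enc.encode(notes)), "tokens")
```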
Annoyingly enough, reworking large amounts of text is exactly what I need it for ¯\_(ツ)_/¯