Day 2 working with the OpenAI Codex Beta

Earlier this morning I used OBS to record my efforts to mess around with the OpenAI Codex Beta. For the most part I have been working in the Codex JavaScript Sandbox, asking the API to return things related to fractals and doing a bit of searching around encryption-related elements. The lossless recording produced about 30 gigabytes of AVI video for five minutes of footage. That is an epic amount of data for such a short video. I’m still not entirely sure why the massive difference exists between OBS’s indistinguishable-quality and lossless settings; it really is about a 10x difference in file size between the two recording methods. Uploading that five-minute video to YouTube took about two hours, and the crunching that is about to happen in the background on the YouTube side of the house will be epic. I’m going to record a few more little videos this weekend, and that is going to generate a huge amount of video data.
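
For a rough sense of scale, here is a quick back-of-the-envelope calculation, assuming it really was about 30 GB over five minutes (the exact numbers are approximate):

const bytes = 30 * 1024 ** 3;          // roughly 30 GB of AVI output
const seconds = 5 * 60;                // five minutes of recording
const megabytesPerSecond = bytes / seconds / 1024 ** 2;
const megabitsPerSecond = megabytesPerSecond * 8;
console.log(`${megabytesPerSecond.toFixed(0)} MB/s, ${megabitsPerSecond.toFixed(0)} Mbps`);
// prints something like "102 MB/s, 819 Mbps", far beyond any normal streaming bitrate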

Day 1 working with the OpenAI Codex Beta

Welcome to day 1 of my efforts working with the OpenAI Codex Beta.

I’m starting out logged into https://beta.openai.com/dashboard

The first thing I noticed is that my interface is a little different from what I watched in the Machine Learning Street Talk video: https://www.youtube.com/watch?v=1CG_I3vMHn4&t

In that video, Tim and Yannic are working with the Codex JavaScript Sandbox, but my beta dashboard only takes me to the Playground area where you can experiment with the API.

Well, a couple of quick Google searches later, it turns out it was user error on my part that kept me away from the sandbox. I just did not know enough to go directly to the sandbox URL: https://beta.openai.com/codex-javascript-sandbox

I downloaded a copy of “The Declaration of Independence” and saved it as a PDF on my desktop. My big plan for tonight is to build an encryption application with Codex and have it encrypt that file from my desktop. It’s not a super ambitious plan, but I think it is a good place to start.
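
To give myself a rough target, here is a hand-written sketch of the kind of thing I am hoping Codex will produce, using Node’s built-in crypto module with AES-256-GCM. The file names, passphrase, and overall structure are placeholders I made up for illustration; nothing here came out of Codex yet.

const crypto = require('crypto');
const fs = require('fs');

function encryptFile(inputPath, outputPath, passphrase) {
  const salt = crypto.randomBytes(16);
  const key = crypto.scryptSync(passphrase, salt, 32); // derive a 256-bit key from the passphrase
  const iv = crypto.randomBytes(12);                   // 96-bit nonce recommended for GCM
  const cipher = crypto.createCipheriv('aes-256-gcm', key, iv);

  const plaintext = fs.readFileSync(inputPath);
  const ciphertext = Buffer.concat([cipher.update(plaintext), cipher.final()]);
  const tag = cipher.getAuthTag();

  // Keep the salt, nonce, and auth tag with the ciphertext so the file can be decrypted later.
  fs.writeFileSync(outputPath, Buffer.concat([salt, iv, tag, ciphertext]));
}

encryptFile('declaration-of-independence.pdf', 'declaration-of-independence.pdf.enc', 'a placeholder passphrase');

Storing the salt and nonce next to the ciphertext is what makes decryption possible later, so whatever Codex generates will need something equivalent.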