Apologies; typing on an iPhone so I’ll be a little brief.
Create an AVURLAsset with the URL of your video – which can be a local file URL if you like. Anything QuickTime can do is fine, so MOV or M4V in H.264 is probably the best source.
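Roughly this, where `videoURL` is a placeholder for whatever NSURL you've got:

```objc
#import <AVFoundation/AVFoundation.h>

AVURLAsset *asset = [AVURLAsset URLAssetWithURL:videoURL options:nil];
```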
Query the asset for tracks of type AVMediaTypeVideo. You should get just one unless your source video has multiple camera angles or something like that, so just taking objectAtIndex:0 should give you the AVAssetTrack you want.
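Continuing with that asset:

```objc
// in shipping code you'd want to load the @"tracks" key asynchronously first,
// via loadValuesAsynchronouslyForKeys:completionHandler:, rather than block here
NSArray *videoTracks = [asset tracksWithMediaType:AVMediaTypeVideo];
AVAssetTrack *videoTrack = [videoTracks objectAtIndex:0];
```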
Use that track to create an AVAssetReaderTrackOutput. In the output settings, you probably want to specify kCVPixelFormatType_32BGRA so the decoded frames come out in a format you can hand straight to OpenGL.
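The pixel format goes into the output settings dictionary under kCVPixelBufferPixelFormatTypeKey:

```objc
NSDictionary *outputSettings =
    [NSDictionary dictionaryWithObject:
            [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA]
                                forKey:(id)kCVPixelBufferPixelFormatTypeKey];

AVAssetReaderTrackOutput *trackOutput =
    [[AVAssetReaderTrackOutput alloc] initWithTrack:videoTrack
                                     outputSettings:outputSettings];
```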
Create an AVAssetReader using the asset, attach the asset reader track output as an output, and call startReading.
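So, something like:

```objc
NSError *error = nil;
AVAssetReader *reader = [[AVAssetReader alloc] initWithAsset:asset error:&error];

// check canAddOutput: first if you want to handle failure gracefully
[reader addOutput:trackOutput];
[reader startReading];
```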
Henceforth you can call copyNextSampleBuffer on the track output to get new CMSampleBuffers, putting you in the same position as if you were taking input from the camera. So you can grab the CVPixelBuffer out of each sample buffer (via CMSampleBufferGetImageBuffer), lock it to get at the pixel contents, and push those to OpenGL via Apple's BGRA extension (GL_APPLE_texture_format_BGRA8888).
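A minimal per-frame sketch, assuming a current GL ES context and that `textureName` is a texture you've already generated; note it ignores row padding, so in real code check CVPixelBufferGetBytesPerRow against width * 4 before uploading directly:

```objc
#import <CoreMedia/CoreMedia.h>
#import <OpenGLES/ES2/gl.h>
#import <OpenGLES/ES2/glext.h>

CMSampleBufferRef sampleBuffer = [trackOutput copyNextSampleBuffer];
if (sampleBuffer)
{
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

    size_t width  = CVPixelBufferGetWidth(pixelBuffer);
    size_t height = CVPixelBufferGetHeight(pixelBuffer);

    // GL_BGRA as a source format comes from GL_APPLE_texture_format_BGRA8888
    glBindTexture(GL_TEXTURE_2D, textureName);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
                 (GLsizei)width, (GLsizei)height, 0,
                 GL_BGRA, GL_UNSIGNED_BYTE,
                 CVPixelBufferGetBaseAddress(pixelBuffer));

    CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
    CFRelease(sampleBuffer);  // copyNextSampleBuffer follows the copy rule
}
else
{
    // NULL means end of stream or an error; inspect reader.status to tell which
}
```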