We are looking for a senior iOS Objective-C developer with camera-functionality experience.
There are two important points for the recording and uploading procedure:
1) Uploading should not slow down recording
We want to record camera footage to a buffer and, about every 10 seconds, upload the
buffered video footage. But we don’t want to lose any camera footage. So when the 10-second
time limit is reached, the buffered footage should be handed off to a background task that
handles the uploading, while the buffer is immediately cleared and new footage is captured.
This way each uploaded clip lines up with the previous one with minimal loss of footage.
We also don’t know how long the upload process will take, so it’s possible that a buffer may fill
up and new footage be ready to upload while the previous footage is still uploading. A queue
of recordings waiting to be uploaded should be maintained so that no footage is lost.
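The segment rotation and upload queue described above could be sketched roughly as follows. This is a minimal illustration, not a finished implementation: the class and method names (`SegmentRecorder`, `uploadFileAtURL:`) are placeholders, the video output settings are omitted, and it assumes an AVCaptureSession is already delivering video sample buffers to this object.

```
// Assumed placeholder class; imports: <AVFoundation/AVFoundation.h>
@interface SegmentRecorder : NSObject
@property (nonatomic, strong) AVAssetWriter *writer;          // current ~10 s buffer
@property (nonatomic, strong) AVAssetWriterInput *videoInput;
@property (nonatomic, strong) NSOperationQueue *uploadQueue;  // serial upload queue
@property (nonatomic, assign) CMTime segmentStart;
@end

@implementation SegmentRecorder

- (instancetype)init {
    if ((self = [super init])) {
        _uploadQueue = [NSOperationQueue new];
        // One upload at a time, in order; slow uploads back up here,
        // never in the capture path, so no footage is lost.
        _uploadQueue.maxConcurrentOperationCount = 1;
    }
    return self;
}

// Called for every video sample buffer from the capture session.
- (void)appendSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    CMTime pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
    if (self.writer == nil) {
        [self startNewSegmentAtTime:pts];
    }
    // Rotate roughly every 10 seconds; capture keeps running throughout.
    if (CMTimeGetSeconds(CMTimeSubtract(pts, self.segmentStart)) >= 10.0) {
        [self finishSegmentAndEnqueueUpload];
        [self startNewSegmentAtTime:pts];
    }
    if (self.videoInput.isReadyForMoreMediaData) {
        [self.videoInput appendSampleBuffer:sampleBuffer];
    }
}

- (void)startNewSegmentAtTime:(CMTime)startTime {
    NSURL *url = [NSURL fileURLWithPath:
        [NSTemporaryDirectory() stringByAppendingPathComponent:
            [NSString stringWithFormat:@"clip-%f.mp4", CFAbsoluteTimeGetCurrent()]]];
    self.writer = [[AVAssetWriter alloc] initWithURL:url
                                            fileType:AVFileTypeMPEG4
                                               error:nil];
    // nil settings = passthrough; a real build needs proper output settings.
    self.videoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                         outputSettings:nil];
    self.videoInput.expectsMediaDataInRealTime = YES;
    [self.writer addInput:self.videoInput];
    [self.writer startWriting];
    [self.writer startSessionAtSourceTime:startTime];
    self.segmentStart = startTime;
}

- (void)finishSegmentAndEnqueueUpload {
    AVAssetWriter *finished = self.writer;
    self.writer = nil; // "clear the buffer" immediately; capture continues
    [finished.inputs makeObjectsPerformSelector:@selector(markAsFinished)];
    NSOperationQueue *queue = self.uploadQueue;
    [finished finishWritingWithCompletionHandler:^{
        [queue addOperationWithBlock:^{
            // uploadFileAtURL: is a placeholder for the actual API call.
            // [self uploadFileAtURL:finished.outputURL];
        }];
    }];
}
@end
```

Finished clips are handed to the serial NSOperationQueue, which naturally implements the waiting-uploads queue from point 1.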
2) We don’t want a recorded clip to end in the middle of a user talking.
So while the buffer should normally be cleared and the recorded footage uploaded every 10
seconds, if a user is in the middle of talking we should wait, and continue to add footage to the
buffer until the user stops talking, up to a maximum buffer length of 30 seconds.
To do this, the plugin should monitor the audio input levels and assume a user is talking when
the audio input level is high. When the audio input level has gone back down and stayed at a
medium or low level for 3 seconds, the plugin can assume the user has stopped talking.
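One way to implement this heuristic is to read `averagePowerLevel` from the audio connection's channels in the capture callback. The sketch below is illustrative only: the dB threshold is an assumed value (the spec does not define "high"), and `lastLoudTime` is an assumed property on the recorder class. `averagePowerLevel` is in decibels, where 0 is full scale and more negative means quieter.

```
// Assumed constants, not from the spec. Imports: <AVFoundation/AVFoundation.h>,
// <QuartzCore/QuartzCore.h> (for CACurrentMediaTime).
static const float kTalkingThresholdDb = -20.0f; // assumed "high" input level
static const NSTimeInterval kSilenceHold = 3.0;  // quiet this long = stopped talking
static const NSTimeInterval kMaxSegment  = 30.0; // hard cap on buffer length

// Called from the audio capture callback; `elapsed` is the current
// segment's age in seconds.
- (BOOL)shouldRotateSegmentNow:(AVCaptureConnection *)audioConnection
                segmentElapsed:(NSTimeInterval)elapsed {
    float level = -160.0f; // silence floor
    for (AVCaptureAudioChannel *channel in audioConnection.audioChannels) {
        level = MAX(level, channel.averagePowerLevel);
    }
    NSTimeInterval now = CACurrentMediaTime();
    if (level >= kTalkingThresholdDb) {
        self.lastLoudTime = now; // user is (probably) talking
    }
    BOOL quietLongEnough = (now - self.lastLoudTime) >= kSilenceHold;
    // Rotate at the 10 s mark only if the user is not mid-sentence;
    // never let a segment exceed 30 s regardless.
    return (elapsed >= 10.0 && quietLongEnough) || elapsed >= kMaxSegment;
}
```

The thresholds would need tuning against real microphone input; a smoothed (moving-average) level may be more robust than the instantaneous reading.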
The recorded video media type will be MP4, and clips will be uploaded via this API:
[login to view URL]
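Since the API link is not visible here, the upload step can only be sketched generically. The example below assumes a simple HTTP POST of the MP4 file via NSURLSession; the endpoint URL, HTTP method, and request format are placeholders to be replaced with whatever the linked API specifies.

```
// Placeholder upload; the real endpoint comes from the API linked above.
- (void)uploadFileAtURL:(NSURL *)fileURL {
    NSURL *endpoint = [NSURL URLWithString:@"https://example.com/upload"]; // placeholder
    NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:endpoint];
    request.HTTPMethod = @"POST";
    [request setValue:@"video/mp4" forHTTPHeaderField:@"Content-Type"];
    NSURLSessionUploadTask *task =
        [[NSURLSession sharedSession] uploadTaskWithRequest:request
                                                   fromFile:fileURL
                                          completionHandler:^(NSData *data,
                                                              NSURLResponse *response,
                                                              NSError *error) {
            if (error) {
                // Leave the file on disk so the queue can retry it later.
                return;
            }
            // Uploaded successfully; free the local disk space.
            [[NSFileManager defaultManager] removeItemAtURL:fileURL error:nil];
        }];
    [task resume];
}
```

Uploading from a file (rather than loading the clip into memory) keeps memory pressure low while recording continues.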