Introduction
This blog post covers how to create an Android camera app from scratch using the Camera2 API. The app will display a camera preview and be able to record video to a local file.
Set Up the Project
The first step is to create a project, using Kotlin as the baseline language. Add the following permissions to `AndroidManifest.xml`, above the `application` block:
```xml
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
```
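As a side note: the app will save recordings to its app-specific external storage directory (via `getExternalFilesDir()`, shown later), which hasn't required `WRITE_EXTERNAL_STORAGE` since API 19, and the permission is ignored from API 29 on. A common refinement (my suggestion, not part of the original setup) is to cap the permission:

```xml
<!-- Optional: app-specific external storage needs no permission on API 19+,
     and WRITE_EXTERNAL_STORAGE is ignored on API 29+. -->
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"
    android:maxSdkVersion="28" />
```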
Edit the Layout
The next step is to edit `activity_main.xml` (the layout inflated by `MainActivity` below). We want to add a `FrameLayout` that matches the parent, and give it the ID `container`:
```xml
<FrameLayout
    android:id="@+id/container"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:layout_editor_absoluteX="1dp"
    tools:layout_editor_absoluteY="1dp">
</FrameLayout>
```
MainActivity
Update `MainActivity` to replace the `FrameLayout` container with a new class, `CameraFragment`, like this:
```kotlin
class MainActivity : AppCompatActivity() {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)
        savedInstanceState ?: supportFragmentManager.beginTransaction()
            .replace(R.id.container, CameraFragment.newInstance())
            .commit()
    }
}
```
A Fragment defines and encapsulates the behavior of a portion of the Android UI. We use one here so the camera behavior is implemented as part of the Fragment rather than the entire Activity.
Now let's implement a "skeleton" `CameraFragment` class in a separate Kotlin file of the same name:
```kotlin
class CameraFragment : Fragment(), View.OnClickListener,
    ActivityCompat.OnRequestPermissionsResultCallback {

    override fun onCreateView(
        inflater: LayoutInflater,
        container: ViewGroup?,
        savedInstanceState: Bundle?
    ): View? = inflater.inflate(R.layout.fragment_camera, container, false)

    override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
        super.onViewCreated(view, savedInstanceState)
    }

    override fun onResume() {
        super.onResume()
    }

    override fun onPause() {
        super.onPause()
    }

    override fun onClick(v: View?) {
        TODO("Not yet implemented")
    }

    companion object {
        fun newInstance(): CameraFragment = CameraFragment()
    }
}
```
Android Studio should automagically import the required dependencies. The `onClick()` function is required by `View.OnClickListener`, and the other functions are part of the `Fragment` lifecycle.
The first lifecycle function that gets called is `onCreateView()`. The view is created by calling the `inflater.inflate()` function, which creates a View from the `fragment_camera` layout, one that we have yet to define.
Defining the Fragment Layout
Let's define it now by creating `fragment_camera.xml` under `layout`, and fill it with the XML below:
```xml
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <ai.e_motion.camera_android.AutoFitTextureView
        android:id="@+id/texture"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_alignParentStart="true"
        android:layout_alignParentTop="true"
        android:layout_alignParentLeft="true" />

    <FrameLayout
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_alignParentBottom="true"
        android:layout_alignParentStart="true"
        android:layout_below="@id/texture"
        android:background="@color/fragment_background">

        <Button
            android:id="@+id/video"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:layout_gravity="center"
            android:text="@string/record" />

        <ImageButton
            android:id="@+id/info"
            android:contentDescription="@string/description_info"
            style="@android:style/Widget.Material.Light.Button.Borderless"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:layout_gravity="center_vertical|end"
            android:padding="@dimen/button_padding"
            android:src="@drawable/ic_action_info" />
    </FrameLayout>
</RelativeLayout>
```
The above layout XML file defines a custom widget, `ai.e_motion.camera_android.AutoFitTextureView`, which is a `TextureView` subclass; and below it, a `FrameLayout` containing a `Button` and an `ImageButton`. `fragment_background`, `record`, `description_info`, `button_padding`, and `ic_action_info` are resources that need to be included; a sketch of them follows. The fragment layout is specific to the camera UI.
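These resources aren't defined anywhere in this post. Here is a minimal sketch for `res/values/` with placeholder values (all of the values, and the extra strings used later in the post, are my assumptions; `ic_action_info` is a drawable you must supply yourself):

```xml
<!-- res/values/ sketch; every value here is a placeholder assumption. -->
<resources>
    <string name="record">Record</string>
    <string name="stop">Stop</string>
    <string name="description_info">Info</string>
    <string name="intro_message">A simple Camera2 video recorder.</string>
    <string name="camera_error">This device doesn\'t support the Camera2 API.</string>
    <color name="fragment_background">#4285F4</color>
    <dimen name="button_padding">16dp</dimen>
</resources>
```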
`AutoFitTextureView` needs to be implemented in code, so let's create a new class and put it in a file of the same name:
```kotlin
package ai.e_motion.camera_android

import android.content.Context
import android.util.AttributeSet
import android.view.TextureView
import android.view.View

/**
 * A [TextureView] that can be adjusted to a specified aspect ratio.
 */
class AutoFitTextureView @JvmOverloads constructor(
    context: Context,
    attrs: AttributeSet? = null,
    defStyle: Int = 0
) : TextureView(context, attrs, defStyle) {

    private var ratioWidth = 0
    private var ratioHeight = 0

    /**
     * Sets the aspect ratio for this view. The size of the view will be measured based on the
     * ratio calculated from the parameters. Note that the actual sizes of the parameters don't
     * matter, that is, calling setAspectRatio(2, 3) and setAspectRatio(4, 6) yield the same
     * result.
     *
     * @param width  Relative horizontal size
     * @param height Relative vertical size
     */
    fun setAspectRatio(width: Int, height: Int) {
        if (width < 0 || height < 0) {
            throw IllegalArgumentException("Size cannot be negative.")
        }
        ratioWidth = width
        ratioHeight = height
        requestLayout()
    }

    override fun onMeasure(widthMeasureSpec: Int, heightMeasureSpec: Int) {
        super.onMeasure(widthMeasureSpec, heightMeasureSpec)
        val width = View.MeasureSpec.getSize(widthMeasureSpec)
        val height = View.MeasureSpec.getSize(heightMeasureSpec)
        if (ratioWidth == 0 || ratioHeight == 0) {
            setMeasuredDimension(width, height)
        } else {
            if (width < ((height * ratioWidth) / ratioHeight)) {
                setMeasuredDimension(width, (width * ratioHeight) / ratioWidth)
            } else {
                setMeasuredDimension((height * ratioWidth) / ratioHeight, height)
            }
        }
    }
}
```
It's clear from the above code that `AutoFitTextureView` provides a method to set the view's aspect ratio, and overrides `onMeasure()` to ensure that ratio is enforced.
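For example, once a preview size is known (as we will do later in `openCamera()`), the view can be locked to that aspect ratio; only the ratio of the two arguments matters:

```kotlin
// Lock the view to 16:9; setAspectRatio(1920, 1080) would behave identically.
textureView.setAspectRatio(16, 9)
```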
Now we need to bind the `AutoFitTextureView`, `Button`, and `ImageButton` to the `CameraFragment` class we created. First we need to define two variables in `CameraFragment`:
```kotlin
private lateinit var textureView: AutoFitTextureView
private lateinit var videoButton: Button

// ...

override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
    super.onViewCreated(view, savedInstanceState)
    textureView = view.findViewById(R.id.texture)
    videoButton = view.findViewById<Button>(R.id.video).also {
        it.setOnClickListener(this)
    }
    view.findViewById<View>(R.id.info).setOnClickListener(this)
}
```
The variables are bound using the `findViewById()` function. In addition, the buttons also set up the `CameraFragment` class as their click listener. Build and run the app. Right now the UI is not very exciting, but there shouldn't be any build errors.
Before we can start working with the Camera2 API, we need to set up a thread to handle video processing. For that we will need to work with `HandlerThread`, `Handler`, and `Looper`.
Working with HandlerThread, Handler and Looper
`HandlerThread` is a thread object with a queue for messages or tasks, with its looper providing the execution context; users insert tasks or messages into the queue through handlers. An `Activity`, including our `MainActivity`, already has its own looper, but the main thread is largely dedicated to UI actions. Video processing should therefore run in a `HandlerThread`.
To work with the camera we will need two threads: a background thread for processing video, and the main thread for displaying the preview. We now create two utility functions, `startBackgroundThread()` and `stopBackgroundThread()`, along with two related variables in the `CameraFragment` class:
```kotlin
private var backgroundThread: HandlerThread? = null
private var backgroundHandler: Handler? = null

// ...

private fun startBackgroundThread() {
    // start() must run before we touch the looper, which blocks until ready.
    val thread = HandlerThread("CameraBackground").also { it.start() }
    backgroundThread = thread
    backgroundHandler = Handler(thread.looper)
}

private fun stopBackgroundThread() {
    backgroundThread?.quitSafely()
    try {
        backgroundThread?.join()
        backgroundThread = null
        backgroundHandler = null
    } catch (e: InterruptedException) {
        Log.e("camera_android", e.toString())
    }
}
```
When `backgroundThread?.start()` is called, the "CameraBackground" thread begins running alongside the main thread; `backgroundThread?.join()` later blocks until that thread has finished. We want to call these two functions from these two lifecycle callbacks:
```kotlin
override fun onResume() {
    super.onResume()
    startBackgroundThread()
    if (textureView.isAvailable) {
        openCamera(textureView.width, textureView.height)
    } else {
        textureView.surfaceTextureListener = surfaceTextureListener
    }
}

override fun onPause() {
    closeCamera()
    stopBackgroundThread()
    super.onPause()
}
```
We also need to 'open' the camera after the thread starts, and 'close' it right before the thread stops. For that reason, we have also put in two placeholder functions, `openCamera()` and `closeCamera()`, that we will define next.
Access the CameraManager
We will define two functions:
```kotlin
private val cameraOpenCloseLock = Semaphore(1)  // semaphore for managing camera access
private var cameraDevice: CameraDevice? = null

// ...

@SuppressLint("MissingPermission")
private fun openCamera(width: Int, height: Int) {
    if (!hasPermissionsGranted(VIDEO_PERMISSIONS)) {
        requestVideoPermissions()
        return
    }
    val cameraActivity = activity
    if (cameraActivity == null || cameraActivity.isFinishing) return

    // ...

    val manager = cameraActivity.getSystemService(Context.CAMERA_SERVICE) as CameraManager
}

private fun closeCamera() {
    try {
        cameraOpenCloseLock.acquire()
        cameraDevice?.close()
        cameraDevice = null
    } catch (e: InterruptedException) {
        throw RuntimeException("Interrupted while trying to lock camera closing.", e)
    } finally {
        cameraOpenCloseLock.release()
    }
}
```
In `openCamera()` we first check for permission to use the camera, then we save the `CameraFragment`'s activity in `cameraActivity` so we can safely use the activity later. Then we get a handle to the `CameraManager` through the `getSystemService()` function; later we will add more code to `openCamera()`. In `closeCamera()` we close the camera after acquiring the semaphore, to ensure we don't interrupt the camera in the middle of an operation. At this point `cameraDevice` is not yet assigned anywhere, and we will add more code to `closeCamera()` later as well. The above code also calls a couple of functions to check for camera usage permission, and to obtain permission if necessary. We will discuss them next.
Permissions Checking
Permission checking is done by the three functions below:
```kotlin
private fun hasPermissionsGranted(permissions: Array<String>) =
    permissions.none {
        checkSelfPermission((activity as FragmentActivity), it) !=
            PermissionChecker.PERMISSION_GRANTED
    }

private fun requestVideoPermissions() {
    if (shouldShowRequestPermissionRationale(VIDEO_PERMISSIONS)) {
        ConfirmationDialog().show(childFragmentManager, FRAGMENT_DIALOG)
    } else {
        requestPermissions(VIDEO_PERMISSIONS, REQUEST_VIDEO_PERMISSIONS)
    }
}

private fun shouldShowRequestPermissionRationale(permissions: Array<String>) =
    permissions.any { shouldShowRequestPermissionRationale(it) }
```
The `hasPermissionsGranted()` function calls `Array.none()`, which returns `true` if no elements of the array match the given predicate. In this case, if no permission in the array is un-granted (note the double negative), or in other words, if all permissions in the array are granted, then `hasPermissionsGranted()` returns `true`.
The `shouldShowRequestPermissionRationale()` function calls `Array.any()`, which returns `true` if at least one element in the array satisfies the predicate. In this case, the predicate is `shouldShowRequestPermissionRationale(it)`, a member function of `Fragment`.
Working with CameraManager
Now, let's go back to the `CameraManager` in the `openCamera()` function. We want to select a camera (front or back), retrieve the camera's characteristics, such as sensor orientation, video size, and preview size, scale the `textureView` for the preview accordingly, and finally open the hardware device. For all that we will update the `openCamera()` function to look like this:
```kotlin
@SuppressLint("MissingPermission")
private fun openCamera(width: Int, height: Int) {
    if (!hasPermissionsGranted(VIDEO_PERMISSIONS)) {
        requestVideoPermissions()
        return
    }
    val cameraActivity = activity
    if (cameraActivity == null || cameraActivity.isFinishing) return

    val manager = cameraActivity.getSystemService(Context.CAMERA_SERVICE) as CameraManager
    try {
        if (!cameraOpenCloseLock.tryAcquire(2500, TimeUnit.MILLISECONDS)) {
            throw RuntimeException("Time out waiting to lock camera opening.")
        }
        val cameraId = manager.cameraIdList[0]

        // Choose the sizes for camera preview and video recording
        val characteristics = manager.getCameraCharacteristics(cameraId)
        val map = characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP)
            ?: throw RuntimeException("Cannot get available preview/video sizes")
        sensorOrientation = characteristics.get(CameraCharacteristics.SENSOR_ORIENTATION)!!
        videoSize = chooseVideoSize(map.getOutputSizes(MediaRecorder::class.java))
        previewSize = chooseOptimalSize(map.getOutputSizes(SurfaceTexture::class.java),
            width, height, videoSize)

        if (resources.configuration.orientation == Configuration.ORIENTATION_LANDSCAPE) {
            textureView.setAspectRatio(previewSize.width, previewSize.height)
        } else {
            textureView.setAspectRatio(previewSize.height, previewSize.width)
        }
        configureTransform(width, height)
        manager.openCamera(cameraId, stateCallback, null)
    } catch (e: CameraAccessException) {
        showToast("Cannot access the camera.")
        cameraActivity.finish()
    } catch (e: NullPointerException) {
        // Currently an NPE is thrown when the Camera2 API is used but not
        // supported on the device this code runs on.
        ErrorDialog.newInstance(getString(R.string.camera_error))
            .show(childFragmentManager, FRAGMENT_DIALOG)
    } catch (e: InterruptedException) {
        throw RuntimeException("Interrupted while trying to lock camera opening.")
    }
}
```
The key new lines, from `val cameraId = manager.cameraIdList[0]` through `configureTransform(width, height)`, retrieve the camera's ideal preview dimensions before `manager.openCamera()` is called, so the `textureView` can be adjusted. Other functions, such as `showToast()` and the `ErrorDialog` class, are ancillary capabilities for exception handling.
The `manager.openCamera()` function takes `stateCallback` as an argument. The `stateCallback` variable needs to be declared in the `CameraFragment` class:
```kotlin
private val stateCallback = object : CameraDevice.StateCallback() {

    override fun onOpened(cameraDevice: CameraDevice) {
        cameraOpenCloseLock.release()
        this@CameraFragment.cameraDevice = cameraDevice
        startPreview()
        configureTransform(textureView.width, textureView.height)
    }

    override fun onDisconnected(cameraDevice: CameraDevice) {
        cameraOpenCloseLock.release()
        cameraDevice.close()
        this@CameraFragment.cameraDevice = null
    }

    override fun onError(cameraDevice: CameraDevice, error: Int) {
        cameraOpenCloseLock.release()
        cameraDevice.close()
        this@CameraFragment.cameraDevice = null
        activity?.finish()
    }
}
```
From the code it's clear that `cameraDevice` is initialized when the camera is opened, and closed when the camera is disconnected or when there is a camera error. Note that the semaphore is used to ensure the camera hardware is accessed without conflict, and that the `textureView` dimensions are restored in `onOpened()` via `configureTransform()`. Also note that `startPreview()` is called in `onOpened()`; we will look at how camera preview is handled next.
Handling Camera Preview
Android camera video displays on a preview surface, which is our `textureView`. To start the preview, we first close any existing capture session, which is what `closePreviewSession()` does. Then we create a texture buffer to capture the camera image. The buffer is then packaged with the `cameraDevice`'s preview request, and a capture session is created on the background thread.
```kotlin
private fun startPreview() {
    if (cameraDevice == null || !textureView.isAvailable) return

    try {
        closePreviewSession()
        val texture = textureView.surfaceTexture
        texture.setDefaultBufferSize(previewSize.width, previewSize.height)
        previewRequestBuilder = cameraDevice!!.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW)

        val previewSurface = Surface(texture)
        previewRequestBuilder.addTarget(previewSurface)

        cameraDevice?.createCaptureSession(listOf(previewSurface),
            object : CameraCaptureSession.StateCallback() {

                override fun onConfigured(session: CameraCaptureSession) {
                    captureSession = session
                    updatePreview()
                }

                override fun onConfigureFailed(session: CameraCaptureSession) {
                    if (activity != null) showToast("Failed")
                }
            }, backgroundHandler)
    } catch (e: CameraAccessException) {
        Log.e("CameraFragment", e.toString())
    }
}

private fun updatePreview() {
    if (cameraDevice == null) return

    try {
        setUpCaptureRequestBuilder(previewRequestBuilder)
        captureSession?.setRepeatingRequest(previewRequestBuilder.build(), null, backgroundHandler)
    } catch (e: CameraAccessException) {
        Log.e("CameraFragment", e.toString())
    }
}

private fun setUpCaptureRequestBuilder(builder: CaptureRequest.Builder?) {
    builder?.set(CaptureRequest.CONTROL_MODE, CameraMetadata.CONTROL_MODE_AUTO)
}

private fun closePreviewSession() {
    captureSession?.close()
    captureSession = null
}
```
The capture session calls `onConfigured()`, which starts the preview update by calling `updatePreview()`, which in turn starts "repeating" captures to the preview buffer. We also need to add `closePreviewSession()` to the `closeCamera()` function, like this:
```kotlin
private fun closeCamera() {
    try {
        cameraOpenCloseLock.acquire()
        closePreviewSession()
        cameraDevice?.close()
        cameraDevice = null
    } catch (e: InterruptedException) {
        throw RuntimeException("Interrupted while trying to lock camera closing.", e)
    } finally {
        cameraOpenCloseLock.release()
    }
}
```
At this point, the list of `CameraFragment` variables is as follows:
```kotlin
private lateinit var textureView: AutoFitTextureView
private lateinit var videoButton: Button

private var backgroundThread: HandlerThread? = null
private var backgroundHandler: Handler? = null
private val cameraOpenCloseLock = Semaphore(1)  // semaphore for managing camera access

private var sensorOrientation = 0
private lateinit var previewSize: Size
private lateinit var videoSize: Size
private lateinit var previewRequestBuilder: CaptureRequest.Builder
private var cameraDevice: CameraDevice? = null
private var captureSession: CameraCaptureSession? = null

//============================================================================================
// [TextureView.SurfaceTextureListener] handles TextureView lifecycle events
//============================================================================================
private val surfaceTextureListener = object : TextureView.SurfaceTextureListener {

    override fun onSurfaceTextureAvailable(texture: SurfaceTexture, width: Int, height: Int) {
        openCamera(width, height)
    }

    override fun onSurfaceTextureSizeChanged(texture: SurfaceTexture, width: Int, height: Int) {
        // configureTransform(width, height)
    }

    override fun onSurfaceTextureDestroyed(surfaceTexture: SurfaceTexture) = true

    override fun onSurfaceTextureUpdated(surfaceTexture: SurfaceTexture) = Unit
}

//============================================================================================
// StateCallback handling functions
//============================================================================================
private val stateCallback = object : CameraDevice.StateCallback() {

    override fun onOpened(cameraDevice: CameraDevice) {
        cameraOpenCloseLock.release()
        this@CameraFragment.cameraDevice = cameraDevice
        startPreview()
        configureTransform(textureView.width, textureView.height)
    }

    override fun onDisconnected(cameraDevice: CameraDevice) {
        cameraOpenCloseLock.release()
        cameraDevice.close()
        this@CameraFragment.cameraDevice = null
    }

    override fun onError(cameraDevice: CameraDevice, error: Int) {
        cameraOpenCloseLock.release()
        cameraDevice.close()
        this@CameraFragment.cameraDevice = null
        activity?.finish()
    }
}
```
Now build and run the app; you should see the live camera preview.
Next, we will add a video recording feature to the app.
Video Recording
For video recording we start by implementing the `onClick()` callback:
```kotlin
private var isRecordingVideo = false

// ...

override fun onClick(v: View) {
    when (v.id) {
        R.id.video -> if (isRecordingVideo) stopRecordingVideo() else startRecordingVideo()
        R.id.info -> {
            if (activity != null) {
                AlertDialog.Builder(activity)
                    .setMessage(R.string.intro_message)
                    .setPositiveButton(android.R.string.ok, null)
                    .show()
            }
        }
    }
}
```
The above code snippet creates a private variable inside the `CameraFragment` class to remember whether video is being recorded. It also calls two bookend functions, `startRecordingVideo()` and `stopRecordingVideo()`. Their implementation is as follows:
```kotlin
private var nextVideoAbsolutePath: String? = null
private var mediaRecorder: MediaRecorder? = null

// ...

private fun startRecordingVideo() {
    // Check that a valid cameraDevice and preview textureView are available
    if (cameraDevice == null || !textureView.isAvailable) return

    try {
        closePreviewSession()
        setUpMediaRecorder()
        val texture = textureView.surfaceTexture.apply {
            setDefaultBufferSize(previewSize.width, previewSize.height)
        }

        // Set up Surfaces for the camera preview and the MediaRecorder
        val previewSurface = Surface(texture)
        val recorderSurface = mediaRecorder!!.surface
        val surfaces = ArrayList<Surface>().apply {
            add(previewSurface)
            add(recorderSurface)
        }
        previewRequestBuilder = cameraDevice!!.createCaptureRequest(CameraDevice.TEMPLATE_RECORD).apply {
            addTarget(previewSurface)
            addTarget(recorderSurface)
        }

        // Start a capture session.
        // Once the session starts, we can update the UI and start recording.
        cameraDevice?.createCaptureSession(surfaces,
            object : CameraCaptureSession.StateCallback() {

                override fun onConfigured(cameraCaptureSession: CameraCaptureSession) {
                    captureSession = cameraCaptureSession
                    updatePreview()
                    activity?.runOnUiThread {
                        videoButton.setText(R.string.stop)
                        isRecordingVideo = true
                        mediaRecorder?.start()
                    }
                }

                override fun onConfigureFailed(cameraCaptureSession: CameraCaptureSession) {
                    if (activity != null) showToast("Failed")
                }
            }, backgroundHandler)
    } catch (e: CameraAccessException) {
        Log.e("CameraFragment", e.toString())
    } catch (e: IOException) {
        Log.e("CameraFragment", e.toString())
    }
}

private fun stopRecordingVideo() {
    isRecordingVideo = false
    videoButton.setText(R.string.record)
    mediaRecorder?.apply {
        stop()
        reset()
    }
    if (activity != null) showToast("Video saved: $nextVideoAbsolutePath")
    nextVideoAbsolutePath = null
    startPreview()
}
```
We need to create the `mediaRecorder` instance in `openCamera()` by adding the middle line below, just above the `manager.openCamera()` call:
```kotlin
configureTransform(width, height)
mediaRecorder = MediaRecorder()
manager.openCamera(cameraId, stateCallback, null)
```
We also need to release the `mediaRecorder` in `closeCamera()` by inserting the two `mediaRecorder` lines shown below:
```kotlin
private fun closeCamera() {
    try {
        cameraOpenCloseLock.acquire()
        closePreviewSession()
        cameraDevice?.close()
        cameraDevice = null
        mediaRecorder?.release()
        mediaRecorder = null
    } catch (e: InterruptedException) {
        throw RuntimeException("Interrupted while trying to lock camera closing.", e)
    } finally {
        cameraOpenCloseLock.release()
    }
}
```
The `setUpMediaRecorder()` function prepares recording:
```kotlin
@Throws(IOException::class)
private fun setUpMediaRecorder() {
    val cameraActivity = activity ?: return

    if (nextVideoAbsolutePath.isNullOrEmpty()) {
        nextVideoAbsolutePath = getVideoFilePath(cameraActivity)
    }

    val rotation = cameraActivity.windowManager.defaultDisplay.rotation
    when (sensorOrientation) {
        SENSOR_ORIENTATION_DEFAULT_DEGREES ->
            mediaRecorder?.setOrientationHint(DEFAULT_ORIENTATIONS.get(rotation))
        SENSOR_ORIENTATION_INVERSE_DEGREES ->
            mediaRecorder?.setOrientationHint(INVERSE_ORIENTATIONS.get(rotation))
    }

    mediaRecorder?.apply {
        setAudioSource(MediaRecorder.AudioSource.MIC)
        setVideoSource(MediaRecorder.VideoSource.SURFACE)
        setOutputFormat(MediaRecorder.OutputFormat.MPEG_4)
        setOutputFile(nextVideoAbsolutePath)
        setVideoEncodingBitRate(10000000)
        setVideoFrameRate(30)
        setVideoSize(videoSize.width, videoSize.height)
        setVideoEncoder(MediaRecorder.VideoEncoder.H264)
        setAudioEncoder(MediaRecorder.AudioEncoder.AAC)
        prepare()
    }
}

private fun getVideoFilePath(context: Context?): String {
    val filename = "${System.currentTimeMillis()}.mp4"
    val dir = context?.getExternalFilesDir(null)

    return if (dir == null) {
        filename
    } else {
        "${dir.absolutePath}/$filename"
    }
}
```
The function `getVideoFilePath()` returns a path for a time-stamped `.mp4` file in the app-specific external storage directory, i.e. `Android/data/<your package>/files/`, which is what `getExternalFilesDir(null)` resolves to.
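`setUpMediaRecorder()` also references orientation constants that are never defined in this post. Here is a sketch, modeled on the Camera2VideoKotlin sample, that maps the display rotation to a `MediaRecorder` orientation hint for sensors mounted at 90° or 270°:

```kotlin
// File-level constants, modeled on the Camera2VideoKotlin sample.
private const val SENSOR_ORIENTATION_DEFAULT_DEGREES = 90
private const val SENSOR_ORIENTATION_INVERSE_DEGREES = 270

// android.util.SparseIntArray maps Surface.ROTATION_* to degrees.
private val DEFAULT_ORIENTATIONS = SparseIntArray().apply {
    append(Surface.ROTATION_0, 90)
    append(Surface.ROTATION_90, 0)
    append(Surface.ROTATION_180, 270)
    append(Surface.ROTATION_270, 180)
}
private val INVERSE_ORIENTATIONS = SparseIntArray().apply {
    append(Surface.ROTATION_0, 270)
    append(Surface.ROTATION_90, 180)
    append(Surface.ROTATION_180, 90)
    append(Surface.ROTATION_270, 0)
}
```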
Now you’re ready to run the camera again and it should be able to record.
References
- https://github.com/android/camera-samples/tree/master/Camera2VideoKotlin
- https://developer.android.com/reference/android/hardware/camera2/package-summary
- https://blog.nikitaog.me/2014/10/11/-looper-handler-handlerthread-i/